The machine we cannot stop
Artificial intelligence is advancing at a speed that politics is equipped neither to understand nor to govern. But the deeper problem is not technological: it is our collective inability to imagine an alternative.
In 1964, Herbert Marcuse wrote that advanced capitalist societies had developed an extraordinary capacity: the ability to integrate and neutralise any form of opposition. Dissent was absorbed, transformed into a commodity, reintegrated into the very system it was meant to challenge. Sixty years later, this logic operates with surgical precision at the heart of the artificial intelligence revolution. Criticism of AI becomes a podcast. Concern for workers becomes ethical marketing. The urgency of change is domesticated into corporate panels and government white papers. And meanwhile, the machine advances.
The question we are asking is not new, but it has become brutally urgent: why do we feel incapable of changing anything in the face of such a radical transformation? The answer, as we shall see, is not psychological. It is political. And it has a history.
Capitalist realism in the age of the algorithm
Mark Fisher, in his 2009 essay Capitalist Realism, opened with a blunt observation attributed to Fredric Jameson and Slavoj Žižek: "It is easier to imagine the end of the world than the end of capitalism." Fisher did not intend this as an intellectual provocation. He meant it as a clinical diagnosis of an era that had stopped believing in the future.
Today that sentence could be rewritten as follows: it is easier to imagine an artificial intelligence that replaces the entire global workforce than to imagine a politics capable of governing it. AI does not present itself as a choice between possible options. It presents itself, exactly like the capitalism Fisher described, as reality itself: inevitable, natural, necessary. This is not an ideology that defends itself with arguments. It is an atmosphere that has already occupied all available space in the thinkable.
Capitalist realism is not one ideology among others: it is the suppression of ideology itself, the naturalisation of a contingent order as if it were the only possible one.
— Mark Fisher, Capitalist Realism (2009)
The major technology companies know this, and they build their rhetoric upon it. OpenAI, Google, Anthropic never claim that AI is desirable. They claim it is inevitable. The race toward artificial general intelligence is presented as a law of nature, not as a human decision made by a small number of people in a small number of rooms. This is the most sophisticated form of capitalist realism applied to technology: rendering invisible the subject who decides.
The labour market as a silent battlefield
Projections vary — the McKinsey Global Institute estimates that between 75 and 375 million workers globally may need to change occupational categories by 2030 — but on one point they converge: the transformation underway is not incremental. It is structural.
This is not simply automation, the phenomenon we already experienced with industrial mechanisation. The qualitative leap of contemporary artificial intelligence is that it does not merely replace physical and repetitive labour: it penetrates cognitive, creative, and relational work. The accountant, the lawyer, the journalist, the programmer, the GP: professions that for decades represented the promise of social mobility, that told us study, specialise, and you will be safe.
David Graeber, in his essential Bullshit Jobs (2018), had already exposed an uncomfortable truth: a significant proportion of existing jobs are objectively pointless, and the people doing them know it. The system maintains them not because they are necessary for production, but because work performs a function of social discipline: it occupies time, structures identity, produces docility. With AI attacking even these phantom jobs, the system loses one of its most effective mechanisms of control — without anyone having a plan for what to put in its place.
But there is a second, darker layer, which Graeber had identified in Debt: The First 5,000 Years (2011). Debt — student, mortgage, consumer — is the most powerful instrument of political neutralisation that capitalism has ever invented. An indebted person cannot afford to take risks. Cannot afford to stop working, even if the work is degrading, useless, or on the verge of obsolescence. Debt transforms precarity into a voluntary prison. And in a labour market dismantled by AI, that prison tightens.
Exhaustion as a form of control
Byung-Chul Han, the Korean-German philosopher, offered in The Burnout Society (2010) perhaps the most precise diagnosis of our subjective condition. The novelty of contemporary capitalism, Han writes, is not repression but self-coercion. We are not exploited by an external master: we are entrepreneurs of ourselves, which is to say, our own worst exploiters. Freedom — that freedom of which we are so proud — is the most refined form of control, because it transfers the responsibility for oppression onto the oppressed.
AI does not interrupt this cycle: it accelerates it to the point of paroxysm. The knowledge worker of 2026 lives in a state of permanent optimisation. Updating skills, following courses, learning new tools — not out of desire, but out of necessity, on pain of obsolescence. There is no mental space left to ask whether this system makes sense, because all cognitive energy is absorbed by the urgency of surviving it. Han calls this state performance fatigue: one is not repressed, one is exhausted. And an exhausted person does not imagine revolutions.
The achievement-subject believes itself to be free, but in reality it is a slave without a visible master. It is simultaneously jailer and victim.
— Byung-Chul Han, The Burnout Society (2010)
This exhaustion is not accidental: it is functional. A worker frantically chasing AI in order not to be replaced has neither the time nor the energy to organise, to engage in politics, to imagine a different future. Technological acceleration and individual exhaustion feed each other in a cycle that is, structurally, a cycle of control.
Politics in the age of algorithmic hegemony
Antonio Gramsci wrote in the Prison Notebooks that the dominance of a class is exercised not only through economic force, but through cultural consent: the capacity to make one's own values appear universal, one's own order appear natural, one's own "common sense" appear as the only possible sense. This is what Gramsci called hegemony.
Today, algorithmic hegemony operates in an even more subterranean way. It is not that the major technology companies explicitly convince us that AI is good. It is that they shape us — through the products we use every day, through the platforms that mould our attention, through the interfaces that structure our ways of thinking — to take for granted that computational optimisation is the measure of all things. The value of a human being becomes measurable. Their productivity, monitorable. Their future, predictable.
Wendy Brown, in Undoing the Demos (2015), showed how neoliberalism has transformed political subjectivity: we no longer think of ourselves as citizens but as human capital. Every relationship, every choice, every identity is evaluated in terms of investment and return. When you are human capital, politics loses meaning — because politics is collective and contractual, while human capital is atomised and competitive. AI radicalises this transformation: in a world where every gesture produces data and every datum feeds a model that evaluates you, political subjectivity erodes still further.
The slowness of politics in the face of the machine's velocity
The problem is not that legislators are stupid or corrupt, though examples of this are not lacking. The problem is structural: democratic institutions are designed to manage slow change. Electoral cycles of four or five years, parliamentary processes lasting months, international negotiations lasting years. Artificial intelligence changes in weeks.
When the European Parliament passed the AI Act in March 2024, GPT-4 was already obsolete. By the time its most stringent provisions come into force, the models it regulates will likely be three generations old. Regulation chases technology with the breathless effort of someone running on a treadmill that keeps accelerating.
Fisher had identified something similar in post-Thatcherite bureaucratic vampirism: the paradox by which reforms that promised less state had instead produced an explosion of controls, audits, and accountability requirements — an enormous bureaucratic machine that consumed energy without producing anything real. Today we see something analogous in AI governance: a proliferation of committees, task forces, white papers, ethical principles, accountability frameworks — documents produced, presented at conferences, filed, and then forgotten, while the models continue to train.
Fredric Jameson, the theorist from whom Fisher drew inspiration, had spoken of the political unconscious: economic structures manifest themselves in culture without our noticing. Our collective inability to imagine real governance of AI is not technical ignorance. It is a political symptom: we have so deeply internalised the logic of the market that we cannot conceive of a public authority that places effective limits on technological capital. The limit seems, literally, unthinkable.
Cynicism as adaptation — and as trap
Paolo Virno, the Italian philosopher, analysed in A Grammar of the Multitude (2002) a mechanism that defines our relationship with the system: cynicism. People, Virno writes, know perfectly well that the system is unjust and absurd. They are not deceived by it. They simply act as if they did not know. Cynicism is not ignorance: it is a form of protection against powerlessness. If I do not really believe, I cannot really be disappointed.
In the debate on AI, cynicism takes two complementary forms. The first belongs to the techno-optimists: AI will create more jobs than it destroys, as has always happened with industrial revolutions. The second belongs to the resigned techno-pessimists: it cannot be stopped, there is no point resisting. Both positions lead to the same practical outcome: inaction. The first cynicism is consolatory, the second is paralytic, but the result is identical: nothing changes.
Cynicism is not naivety turned inside out: it is the form that realism takes when it has given up on changing the world.
— Paolo Virno, A Grammar of the Multitude (2002)
The sentence that opens every public conversation about AI — "but what can I do on my own?" — is the perfect formulation of this adaptive cynicism. It is not false: an isolated individual genuinely has minimal power in the face of systemic structures. But it is also a politically functional narrative: it keeps each person in their isolation, and converts that isolation into collective resignation.
Rebuilding political imagination
Axel Honneth, in his theory of the struggle for recognition, argues that change never starts from abstract revolutions but from something far more concrete: the claim to dignity. Every significant social movement in modern history — the labour movement, feminism, civil rights movements — began with people who refused to be invisible. Who said: this is reality, and it is intolerable.
The workers' movement in the face of AI is still searching for this collective awareness. Analyses are not lacking, data are not lacking, technical proposals are not lacking — universal basic income, robot taxation, public ownership of foundation models, democratic governance of data. What is lacking is the capacity to transform individual understanding into collective will.
Gramsci called this capacity counter-hegemony: the construction of an alternative cultural bloc, a new shared "common sense," which precedes and makes possible any concrete political transformation. This is not nostalgia for past models — neither for twentieth-century socialism nor for a Fordist capitalism that will not return. It requires something more difficult: inventing a new vocabulary, a new imagination, for a problem that has no direct historical precedent.
Mark Fisher, before his death in 2017, had identified in the post-punk of the 1970s a rare moment when popular culture had produced genuine formal novelties — sounds that had not existed before, that projected an open future. The pervasive nostalgia of our era, its inability to produce anything genuinely new, was for Fisher a precise political symptom: a culture that cannot imagine the future is a culture that has accepted its own powerlessness as a permanent condition.
Conclusion: the problem is real, the powerlessness is not
Artificial intelligence is a real force, with real effects on labour, on the distribution of wealth, on the structure of power. It is not an illusion manufactured by alarmists. But the sense of powerlessness it produces — that widespread feeling that nothing can be done, that change is as inevitable as a weather event — that, at least in part, is a construction. A politically useful construction for those who control the profits of this change.
Marcuse taught us that the system absorbs every rebellion. Gramsci taught us that before changing the world one must change the way one thinks about it. Han told us we are too exhausted to imagine alternatives. Graeber showed us that debt makes us docile. Brown explained that we think as capital, not as citizens. Virno warned us that cynicism is our defence — and our trap. Jameson and Fisher reminded us that the inability to imagine the future is itself a political act, a surrender.
Politics is slow. Institutions are inadequate. The balance of power is unfavourable. All of this is true. But it is also true that none of the changes we now take for granted — workers' rights, universal suffrage, public healthcare, the very idea of a regulated economy — seemed possible until they became real. They did not seem possible because someone had rendered them unthinkable.
The machine that seems impossible to stop is not artificial intelligence. It is our inability to imagine that we still have, collectively, the power to decide how to use it.