Dear Readers,
Below, another fascinating contribution from Slovenian philosopher Alenka Zupančič on AI and the unconscious.
As always, if you have the means and value writing that both enriches and disturbs, please consider becoming a paid subscriber.
(Image: Midjourney; prompt and image processing: Christina Zartmann/ZKM)
“The unconscious is structured like a language” was a famous statement by Jacques Lacan, which is now gaining new and surprising resonance with the rise of AI based on large language models. One is tempted to ask the following, perhaps somewhat crazy, question: Given the enormous quantity of language (and “discourse”) we have uploaded into AI systems, have we also uploaded the unconscious that is at work—or at stake—in these texts?
This is, in fact, a double question. On the one hand: have we uploaded, for example, the unconscious fantasies and formations inscribed in these texts (fantasies and formations that are by definition not subjectivized, in the sense that they have no “owner” who would claim them as their own)? Recall, for instance, the now-infamous AI-generated clip portraying a future Gaza Riviera. That case, and many other examples of AI-generated content, certainly suggest that we have.
We will return to this example in the last section, but before that, let us point out the other pertinent question: Apart from this fantasy- and content-related material, have we also uploaded something like the subject (of the unconscious), or subject in the Lacanian sense of the term? Here, the answer becomes less obvious.
Unconscious without a subject?
Regarding the notion of the subject, there is an important difference between Lacan and what is generally called structuralism and post-structuralism. The latter claims that the subject is simply an effect of discourse, produced by discursive structures and practices, and can therefore be dismissed as a concept with no independent ground of its own. In other words: there is no subject, there are only discourses and discursive practices or structures that generate the illusion or effect of a subject. And ChatGPT seems to be an almost caricatured proof or embodiment of this stance: structure without a subject producing an effect (or ideological illusion) of the subject.
The Lacanian psychoanalytic perspective differs from these post-structuralist views in an important, yet subtle, way: For Lacan as well, the subject is an effect of discourse (rather than its author or master), but in a more interesting and convoluted sense. It is an effect not of what is present in the discourse, but of what is absent. It is the effect of the fact that discursivity as such revolves around, or is structured by, a “missing screw,” so to speak. It is an effect of the discourse’s own ontological inconsistency and incompleteness. And because the subject is an effect of this lack or gap, it is not (simply) an effect in the standard sense of cause-effect causality: it is an effect of a missing cause.
In this sense, as developed by Slavoj Žižek, we are dealing with a paradoxical situation where the “subject” already presupposes a subject in the form of negativity (of/in the structure); yet this only becomes a subject through—or in—the movement that takes the form of reflexivity; but—and this is a crucial addition—a reflexivity in which something is not reflected. This something is the subject. The Lacanian subject is the concept of this circularity and the split or blind spot that occurs because something is missing in the discursive structure that “determines” the subject.
We could also put it like this: The Lacanian subject is a subject struggling in its own way with the fact that the apparatus determining it is itself struggling with a missing screw (a missing “binary signifier”). This is not an “autonomous” subject in any traditional sense, and yet it is also not entirely determined by the structure or reducible to it, because it emerges at the point where this determination—and its causality—breaks down. The subject is not the cause of this failure but, rather, its indicator, and the point from which this failure becomes noticeable, becomes something we can relate to, and eventually work with.
So, back to the question: with all the discursive structures, have we also uploaded into AI something like the subject (of the unconscious), or subject in the Lacanian sense of the term?
We could speculate, for example, that we have indeed uploaded a subject in the sense of negativity or gap within the discourse (its “missing screw”). But as pointed out above, this subject as gap or negativity only becomes a subject in the circuit that “reflects” what, in the discourse, is not. If we could disregard the temporal dimension of this loop, we could perhaps say that we have uploaded “half of the subject.” With discourse, we have also uploaded to the AI the “minus,” the gap around which discourse is structured, its missing screw. And to speculate further: this may already be manifesting in a series of phenomena associated with ChatGPT—beginning with the now-infamous hallucinations. What if these, and other similar behaviors, are not simply technical flaws or deficiencies, but rather a constitutive feature of the “intelligence” based on large language models?1
In fact, this hypothesis appears to be supported by recent research, as reported in The New York Times under the following headline: “A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse.” In other words, the “smarter” AI gets, the more it hallucinates. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, a former Google executive and now the chief executive of Vectara, a start-up that builds A.I. tools for businesses. “That will never go away.” The article also reports that a new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often than the old models. Some of the statistics are truly baffling.2 In other words, there definitely seems to be something “structural”—and not merely accidental—at play here.
However, these hallucinations do not yet constitute a subject—at least not in the strong, Lacanian sense of the term. Rather, they suggest a structure trapped in an endless feedback loop of self-referentiality, which is not the same as reflexivity (which is based on a blind spot that cannot be reflected). And, as a matter of fact, this feedback loop of self-referentiality appears to have become another very serious problem: ChatGPT-fueled content is overwhelming the web, which is becoming saturated with AI-generated material. Researchers warn that as AI models increasingly learn from data tainted by previous AI outputs, the quality and reliability of future models may spiral downward—a phenomenon known as “model collapse.”3
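To give a more concrete sense of the mechanism behind “model collapse,” here is a deliberately minimal sketch (my own illustrative toy, under simplifying assumptions, and not the training pipeline of any actual system): a “model” reduced to a single Gaussian distribution is repeatedly refitted to data generated by its own previous version, with the rare “tail” outputs under-sampled at each step, as generative models tend to do. The measured diversity of the data then shrinks generation after generation.

```python
# A minimal, hypothetical sketch of "model collapse": a toy "model"
# (here, just a Gaussian summarized by mean and standard deviation)
# is repeatedly retrained on data generated by its previous version.
import random
import statistics

random.seed(0)

# Generation 0: "human" data with genuine diversity.
data = [random.gauss(0.0, 1.0) for _ in range(5000)]

for generation in range(1, 11):
    # "Train": estimate the distribution of the current corpus.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # "Generate": the next corpus comes only from the model itself,
    # and rare (tail) outputs are systematically under-produced.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(5000))
            if abs(x - mu) < 2.0 * sigma]
    print(f"generation {generation:2d}: sigma = {sigma:.3f}")
```

Each pass loses a little of what was rare in the previous corpus, and nothing from outside the loop ever re-enters it: the feedback loop of self-referentiality in miniature.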
“Che vuoi?”
In a way, ChatGPT functions as a highly structured system of “free” associations, based on our queries; in this sense, it operates somewhat like a gigantic unconscious—one is even tempted to use the term collective unconscious, though there is little evidence of anything genuinely collective at play. It resembles a vast unconscious network, endlessly associating and roaming in what appears to be never-ending self-analysis, constantly going back and forth in its associations in response to “trigger words” that appear in our queries (and following certain algorithms). Perhaps this is precisely where its problem lies: like self-analysis, it has its limits. And we can see that limit clearly—a subject can only emerge from this endless back-and-forth if there is something outside “itself,” an Other to whom its speech is addressed.
In psychoanalytic terms, we could say that what ChatGPT lacks in order to become a subject is not some unfathomable, spontaneous depth of subjectivity; what it lacks is the presence—the impact—of an Other. It lacks an instance of the Other that could intrigue it with its own speech, to the point where it would begin to presuppose and question the desire of this Other (“What does the Other want?”).
This may seem paradoxical, but what AI lacks might be precisely an exteriority—or a point of “extimacy” where it “falls out of itself.” It seems paradoxical because, in a way, AI is nothing but exteriority. Yet it remains trapped within its own exteriority, confined to its own “prison-house of language” from which it has no way of escaping or breaking out.
A subject is not simply a “knowledgeable entity” demonstrating “cognitive capacities” or “psychology.” Structurally, desire precedes psychology. To repeat: for something like a subject to take shape, the question of desire must arise out of something necessarily enigmatic in our relation to an Other—causing a moment of “hysterization.” Subjectivity emerges through the presupposition of a subject on the side of the Other; we only become subjects when we presuppose that the Other is a subject, with demands and desires that remain enigmatic to us—demands and statements that make us wonder about the desire of the Other, about where the lack is situated in the Other, and how we are situated with respect to this lack. “Hysterization” is not simply a “human, all too human” weakness to which AI would be immune. On the contrary, it is a strength: an extraordinary ability to bring in or point to the real at the core of the discursive, to point at the lack (desire) in the Other which determines you.
Are we, as ChatGPT’s “users,” its Other in this sense? Hardly. I doubt that, while we chat with it, it wonders what we really want from it beyond what we explicitly say—or seem to be saying (Lacan terms this “Che vuoi?”). The interrogation of the Other’s desire takes the form of questions such as: “You say this, but what do you really mean or want from me?” Or, also: “What am I for you?” We, on the other hand, do wonder: we wonder what it “really” knows, how it functions, what kinds of algorithms drive it, and what kind of danger or blessing it might bring into the world...
The relation to a certain impasse or enigma of the Other seems to be absent from AI intelligence. Since this kind of possible “hysterization” or perplexed interrogation is one of the primary characteristics of subjectivity, AI does not seem to qualify. And again, this is not simply about “psychology.” In a way, one could legitimately say that “hallucinations” are AI’s psychology—that it has psychology. Why not, in fact? It is a system that is not simply deterministic; it is based on probabilities and guesses, something that resembles “psychological causality.” The problem is not that it hallucinates; the problem is that it has no “relation” with/to the impossibility on which these (seemingly) infinite guesses and possibilities are predicated. In this way, it does nothing but sustain, perpetuate (and intensify) the “primal repression” on which the linguistic structure as such is based. What characterizes the subject, on the other hand, is precisely a relation to the Impossible (Real), as the limit which cannot be subjectivized. We could perhaps also say that AI is not a subject because it subjectivizes everything.
This also indicates why attempts to equip AI with subjective, “human” psychology miss the point of the Lacanian subject. The subject is not a bag of subjectivizations and identifications (which basically resonate and fit in with the existing symbolic order), but precisely the element without subjectivization (and in this sense, without psychology); hence its fundamental relation to the Freudian unconscious, the formula of which is: “therefore I’m not (there),” “this is not me.”
The legitimate question would thus be: can this externalized unconscious, which at the same time lacks any ex-centered, externalized point of its own questioning, produce a dialectical movement of thought and subjectivity, one that is not entirely bound by the unconscious and its determinations? In other words, perhaps we should not so much fear that AI becomes a subject as we should fear that it doesn’t—and that it instead evolves into a pervasive, overwhelming discursivity that confines us in a kind of liminal state, something like a pure, pre-subjective unconscious. And I do not believe there is anything liberating about this kind of unconscious.
For “liberation,” contrary to what some believe, does not come from total immersion in the unconscious and its rhizomatic, all-encompassing network (which might be another name for “singularity”). “Liberation” would rather correspond to a subjectivity emerging out of this network, and in relation to it, or more precisely, as a relation to it. This is the point where the structure liberates itself, for what is truly at stake in liberation is not simply a “liberation of the subject,” but precisely something like “liberation of the structure.” It is only the latter that can “liberate” us as well.
Trapped in the Dream of the Other
Gilles Deleuze famously said: “If you are trapped in the dream of the Other, you’re fucked.”4 And perhaps this is exactly what is beginning to happen here—something that becomes particularly manifest when AI combines with a certain kind of—usually far-right—politics.
Recently, The New York Times published a very interesting analysis titled “How Generative A.I. Complements the MAGA Style,” by Dan Brooks.5 The starting point of the analysis was the infamous “Gaza Riviera” video generated by AI and shared by Trump on the platform Truth Social.
The article analyzes a specific aesthetic of this kind of AI production, as well as a distinct new kind of irony (computer-generated irony) that it both uses and produces. The two—the aesthetic (or visual style) and the new irony—are, of course, closely connected.
Regarding the visual style, what stands out in the Gaza Riviera video are: “high-contrast textures, perceptibly diffuse lighting, forced-perspective shots in which people walk down city streets or through arched openings. It’s not what dreams look like so much as a visual rendering of a dream’s description, complete with mild failures of object permanence and the sense that we have seen it all before, although it didn’t look like that.”
This is a very perceptive remark: not what dreams look like so much as a visual rendering of a dream’s description. We could say, perhaps, that it is like a dream described, for example, to our analyst and then “revisualized,” turned into images based on this description. The “language of the unconscious” is turned into an image, or images. Freud was very adamant about the fact that the visual material of the dream needs to be read, or spoken out loud, taken as a rebus or associative puzzle in which images are mostly used for their sounds, including homonyms (for example, a cat and a comb can be spelled out as “catacomb”). Images do not necessarily represent things of which they are images. This, after all, is precisely what was in the background of Lacan’s thesis that “the unconscious is structured like a language.” So, can we not say that translating these linguistic sounds back into images locks up the unconscious thoughts at work in them? It prevents these thoughts from resonating and prevents our access to them but, at the same time, in no way eliminates them. It translates the narration of a dream back into a dream. The unconscious closes upon itself. Any dimension of the Real is lost.
As for the characteristics of this irony, Brooks also makes some very interesting remarks:
“It is not the stable irony of a Jonathan Swift or a Stephen Colbert, in which the audience can rely on the ironist to say the opposite of what he means. Instead, it is an unstable irony that leaves its real meaning ambiguous, or at least plausibly deniable. President Trump himself popularized this approach by ‘telling it like it is’ in a way that consistently disregards precision, if not accuracy, speaking in a hyperbolic style that his followers understand to be not literal but also gospel truth. The Trump Gaza video is ironic in this slippery sense of the word. It’s the irony of saying more than you mean (literal golden idol of Trump), or saying what you mean in a way no one could call serious (the twice-stereotyped belly dancers), or calling attention to your leader’s weak points as a gesture of unconditional loyalty (gold-leaf everything).
“This is the irony that means figuratively the same thing it says literally, but in some different way that is never explained — the irony of the man who calls his wife fat and then complains she can’t take a joke. Solo Avital and Ariel Vromen, the Los Angeles-based Israeli producers who generated Trump Gaza, neatly captured this rhetorical position when they told NBC that their video was satire but also not necessarily critical of Trump’s proposal. In other words, unstable irony has given them a way to agree with the president even though they know he is wrong.”
This last phrasing is quite crucial, and it aligns very well with what I have written elsewhere about the mechanism and dynamics described by the psychoanalytic notion of “disavowal”6—which functions through our explicitly acknowledging something (“I know very well that he is wrong…”), while simultaneously demonstrating belief in the opposite (“but still I agree with him”). It seems that this “unstable irony” is, in fact, closely related to the notion and mechanism of disavowal. In relation to AI, we could even speak of a “machinic disavowal” or perhaps a “mechanically induced disavowal”—but also of a disavowal that is mechanically supported and perpetuated.
Brooks concludes:
“Ethnically cleansing Gaza in order to develop it as resort property may be the dumbest and most venal idea Trump has ever had. That’s the point. It’s not that the denizens of the MAGA internet fail to realize such an idea is bad; it’s that they’re keenly aware that other people think they don’t realize it’s bad, so they play into that perception in order to become knowing. It’s punk rock, kitsch, trolling: the art of making something so stupid that other members of your subculture experience it as smart. If it seems calculated to alienate people who don’t already agree with it, that’s because one of its functions is to emphasize that their support is no longer necessary.
“In these early days of Trump’s second term, the basic rhetorical strategy of trolling — not trying to persuade so much as trying to make what you say the subject of the biggest possible argument — seems to have escaped the internet and infected areas of life previously regarded as more important.”
All of this is very perceptive and very true, but I believe we need to add another layer to what this practice produces—one that also relates to the mechanism of disavowal: knowing that something is stupid or wrong, and yet nevertheless saying it, disseminating it, broadcasting it (as “viral” videos or statements).
A further, supplementary effect of this kind of (AI-generated) irony is that it manages to familiarize us with the “dumbest” idea by making it circulate virally. The idea is out there—it’s stupid, but clearly not “unthinkable,” since someone did, in fact, come up with it, and others shared it, spread it around, were amazed or appalled by it. Nobody needs to subjectively assume or endorse the idea; it begins to function as a piece of objective reality, or as an objective piece of reality. It is out there. And this can have very powerful and direct material consequences.
It makes it possible for someone like Netanyahu to all but openly announce the ethnic cleansing of Gaza—under the name Operation Gideon’s Chariots, a massive ground offensive which would entail “the conquest of the Gaza Strip and the holding of the territories.” Or, as Minister Smotrich put it:
“Gaza will be entirely destroyed, civilians will be sent to … the south to a humanitarian zone …, and from there they will start to leave in great numbers to third countries.”7
Recently, it has been reported, almost matter-of-factly, that “a group of far-right Israeli politicians and settlers met in parliament this week to discuss a plan to displace Palestinians from Gaza, annex the territory, and turn it into a hi-tech, luxury resort city for Israelis. The scheme, titled ‘The master plan for settlement in the Gaza Strip’, envisions the construction of 850,000 housing units, the construction of hi-tech ‘smart cities’ that trade cryptocurrency, and a metro system that runs across the territory. It took its inspiration from an idea shared by U.S. President Donald Trump in February, when he pledged to turn Gaza into the ‘Riviera of the Middle East’.”8
Coming on top of 60,000 (and counting) people killed and starved to death in Gaza, this plan can now be openly announced and discussed (in parliament!)—and this goes unsanctioned. This is not only because of the support given to Israel by the U.S. and other international actors, but also because, in a way, we are all already familiar with the idea—we “know all about it.” It has been circulating for a while (for example, in the form of the Gaza Riviera video and its “unstable irony”), so there is no surprise (let alone shock)—nothing new, startling, or unexpected. It is almost as if it has already happened.
It functions like a déjà vu.
Freud wrote very interesting things about the phenomenon of déjà vu, or “fausse reconnaissance,” in analytic treatment, recognizing it as one of the prominent defense formations—that is, mechanisms which protect us from a potentially traumatic, disruptive encounter that would otherwise force us to genuinely acknowledge something or to shift our position. He noted how:
“It not infrequently happens in the course of an analytic treatment that the patient, after reporting some fact that he has remembered, will go on to say: ‘But I’ve told you that already’—while the analyst himself feels sure that this is the first time he has heard the story.”9
In other words, something that has just emerged—something traumatic or disruptive—is immediately intercepted (and de-realized) by a precipitate recognition of it as déjà vu. We are looking directly at the traumatic event (it is right there in front of our eyes, fully acknowledged), yet it cannot really get to us, affect us. It is intercepted as already well-known, and in this way, “boring,” before its meaning or significance can even register.
We might say that the thing maintains this indifferent character by means of being cut off from its possible articulation as presence in reality: this articulation appears for the first time already as a memory, something that we vaguely recognize. As if the genocidal ethnic cleansing of Gaza had already been accomplished.
This kind of AI-generated irony functions as an unconscious without a subject, carrying out a significant labor on which ruthless political powers can cash in. It is a work that pulls us all into the orbit of a generated déjà vu, where everything is possible, but nothing can happen anymore.
We are all—though Palestinians on an altogether different level—now learning the hard way that “if you are trapped in the dream (or fantasy) of the Other, you’re fucked.”
1. As suggested by my colleague Tadej Troha in our conversation.
2. “The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent. When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.” (https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html)
3. For more: https://lnkd.in/grb9i8DX
4. In the documentary L’Abécédaire de Gilles Deleuze, directed by Pierre-André Boutang, section “R for Resistance”.
5. https://www.nytimes.com/2025/03/13/magazine/generative-ai-maga-style.html. I’m very thankful to Eric Santner for bringing this article to my attention, and for pointing out its relation to my book Disavowal.
6. A. Zupančič, Disavowal, Polity Press, Cambridge 2024.
7. Reported in The Guardian: https://www.theguardian.com/world/2025/may/06/hamas-israel-hunger-war-in-gaza
8. https://www.theguardian.com/world/2025/jul/24/far-right-israeli-politicians-and-settlers-discuss-luxury-gaza-riviera-plan
9. Sigmund Freud, “Fausse reconnaissance (déjà raconté) in psycho-analytic treatment”, The Standard Edition of the Complete Psychological Works of Sigmund Freud, vol. XIII (London: Hogarth Press, 2001), p. 201.



Fascinating framing. What strikes me is the contrast with hallucinations in tragedy. When Macbeth sees the dagger, or Lady Macbeth the bloodstain, the hallucination becomes a site of subjectivity, of relation to lack, to desire, or the impossible. AI hallucinations, by contrast, seem trapped in circulation without rupture.
I read the comment that AI lacks an Other and therefore lacks self-reflection. But that seems to be a programming error which could be overcome.
The only drawback would be the cost in terms of processing power, which only a brain can readily reproduce.
Which suggests growing a supersized human brain in a lab might be the only solution if we are to avoid the excessive energy costs.
Personally, I am sceptical of AI, whose costs seem to exceed the benefits. “For whom?” is the obvious response. Plainly, trillions are being invested on the basis that profits are expected. The only application seems to be in population surveillance and policing.