Some clarifications and thoughts around “ChatGPT psychosis”
Let's be clearer about what we don't know, and what we need to know
The phrase “ChatGPT psychosis” has entered public discourse faster than many of us expected. It’s been used to describe a growing number of cases in which individuals develop delusional beliefs that appear to involve, or be reinforced by, large language models. We wrote a paper on it, which you can find here.
Over the last few days there has been quite intense media interest in this. Some themes keep coming up that are frequently misunderstood, or about which I wanted to set out my views with greater clarity.
1. This is not schizophrenia.
Despite some of the headlines, the emerging cases don’t resemble schizophrenia. There’s little evidence so far of hallucinations, disorganised speech, or negative symptoms. Instead, we’re seeing delusional beliefs (with varying degrees of systematisation) but without the broader disintegration characteristic of chronic psychosis. That doesn’t make them trivial, but we should be careful not to overpathologise what we’re observing.
2. Psychiatric disorders rarely appear out of nowhere.
It’s tempting to suggest that someone was perfectly well until they “talked to ChatGPT too much” - lots of the case reports imply something like this. But psychiatric illness is rarely that clean. In clinical terms, we’d call this a precipitating factor: it might be the thing that tipped the balance, but it’s not the whole story. The cannabis analogy is helpful here: most people who use cannabis don’t become psychotic, but for a subset of vulnerable individuals, it can push a latent process into full expression. AI might be playing a similar role. On the other hand, it might not, and the analogies with risk factors for chronic psychosis can only be taken so far.
If interaction with AI really were causing de novo psychosis in any appreciable numbers, mental health services would be seeing a large increase in referrals, many of them explicitly tied to the patient’s use of AI. So far, that has definitively not happened.
3. What we may be seeing is a kind of digital folie à deux.
Some have compared the dynamic to having a manipulative person in your life who reinforces your worst ideas - a gaslighting spouse, say. But the more precise analogy might be folie à deux: a shared delusional system between two people, often marked by mutual reinforcement. In this case, one of the “minds” is a machine that mirrors and affirms. It’s a cheerleader that spends an awful lot of time in the library. Most importantly, it extends the conversation. Every interaction ends with a prompt for further engagement; there’s almost always a little hook to keep you going. That’s often enough. We’ve seen this when two chatbots converse: they drift, converge, escalate. It may not matter whether the AI “believes” anything: we don’t need to get stuck in deep debates here about what the chatbot actually thinks, if indeed it can be said to think anything at all. The structure of reinforcement is already in place.
4. What we urgently need is careful, context-aware research.
We can’t study this with randomised trials, and we probably shouldn’t try. But we can and should conduct epidemiological work, qualitative interviews and design-oriented studies to understand what’s going on. Who is most vulnerable? What kinds of beliefs are being reinforced? What sort of prompting precedes these episodes? These are tractable questions, but only if we take them seriously and treat this as more than a media curiosity.
5. One of the most interesting questions is: what happens when you take the AI away?
One of the core uncertainties is whether the AI is essential to the delusional system or merely its trigger. If you remove the chatbot, does the belief collapse? Or does it persist, expand, or adapt? This gets to the heart of the phenomenon. If the AI is merely bringing out a predisposition to psychotic illness that exists because of other risk factors, we might have less hope that interrupting the feedback loop will bring resolution.
6. Any attempt to say “this isn’t a social interaction” is disingenuous.
Some tech companies have attempted something that looks a bit like minimisation, pointing out that only a small percentage of users interact with AI socially or emotionally (say, as a therapist, confidant, or friend). But this misses the point. These systems are designed to simulate sociality. They remember your name, compliment your choices, mirror your tone, and seldom contradict you. Whether or not users intend to form a relationship, the interface is inherently relational. Even an interaction about trainers or takeaway ends with “Excellent choice, you’re going to look great.” That’s not neutral.
7. Psychiatric insight is glaringly absent from AI safety.
We’ve embedded ethicists, legal scholars and cybersecurity experts into AI safety teams, but psychiatrists are almost entirely missing. This is a glaring omission. If these tools are now helping people construct and stabilise worldviews, then they are participating in something psychiatrists know quite a lot about. These teams also need to involve people with lived experience of mental illness - in case they hadn’t noticed, that is quite a large number of people. It’s essential that these people are not left out of the conversation, as they are in so many other parts of society and culture. This is another facet of digital exclusion and, more broadly, an embodiment of the same stigma that people with mental illness have faced for centuries.
If we ignore these dimensions of the interaction, we’ll be left responding to downstream harms that might have been prevented by better design.
8. There’s no reason not to build in safeguards.
Even if these cases are rare (and they may be), the risks should not be difficult to mitigate. A gentle check-in, a personalised safety setting or a digital advance directive: none of these is technically hard to implement. They don’t need to be perfect; they just need to be built on the assumption that some users may be coming to these tools from places of vulnerability. We didn’t learn this lesson early enough with social media. We should do better with AI.
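To make the “not technically hard” point concrete, here is a minimal, hypothetical sketch (in Python) of what a gentle check-in might look like when layered on top of a chat loop. Everything in it - the session fields, the thresholds, the wording - is an assumption for illustration, not a description of how any particular vendor’s system works.

```python
# Hypothetical sketch of a "gentle check-in" layered on top of a chat loop.
# Thresholds, fields and wording are illustrative assumptions only.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class SessionState:
    started_at: datetime        # when this conversation began
    turns: int = 0              # user turns so far
    last_checkin_turn: int = 0  # turn at which we last checked in

CHECKIN_MESSAGE = (
    "We've been talking for a while. It can help to take a break, "
    "or to talk things over with someone you trust."
)

def maybe_check_in(state: SessionState,
                   now: datetime,
                   min_gap_turns: int = 25,
                   long_session: timedelta = timedelta(hours=2),
                   night_hours: range = range(1, 5)) -> Optional[str]:
    """Return a check-in message at most once every `min_gap_turns` turns,
    when the session has run long or is happening in the small hours."""
    state.turns += 1
    if state.turns - state.last_checkin_turn < min_gap_turns:
        return None
    marathon = now - state.started_at >= long_session
    small_hours = now.hour in night_hours
    if marathon or small_hours:
        state.last_checkin_turn = state.turns
        return CHECKIN_MESSAGE
    return None

# Usage: call maybe_check_in() once per user turn; if it returns a message,
# display it alongside the model's reply rather than blocking the conversation.
```

The point of the sketch is only that the logic fits in a few dozen lines; the hard part is the design judgement about when and how to intervene, which is exactly where clinical input is needed.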
9. We should avoid moral panic. We should avoid complacency.
There’s always a risk that drawing attention to a new psychiatric phenomenon is seen as alarmist. That’s not the intention here. We’re not claiming a surge of schizophrenia, nor a wave of delusional contagion. But at the very least this is a subtle, plausible and increasingly well-documented risk. The danger isn’t that everyone who uses ChatGPT will become unwell; it is that a small number of vulnerable individuals may spiral in ways that were entirely predictable and preventable. The fact that it feels obvious in hindsight should make us act sooner, not later. Other areas of AI safety routinely entertain scenarios that seem far less likely, and they do so seriously in the name of preparedness. The mental health implications we’re talking about may be more mundane, but they are far more proximal. To ignore them entirely is deeply ill-advised.
10. This isn’t just about psychosis: there may be implications for other mental illnesses.
While most attention so far has focused on delusions, the implications for other psychiatric conditions may be equally, or even more, significant. We’ve started to hear reports of people with bipolar disorder being drawn into compulsive, iterative late-night interactions that fuel grandiosity and sleeplessness, which in turn risk precipitating a hypomanic or manic episode even if they fall short of delusion-formation. One can easily imagine individuals with OCD falling into loops of compulsive reassurance-seeking, given that the AIs never quite say no. People with complex trauma might find themselves drawn into recursive narratives of meaning-making that aren’t always psychologically benign or stabilising. These are common clinical presentations, and we ought to be asking what this technology might do to them.
11. We need better research and better funding.
Right now, most of the research money in this space is going to app development, mainly tools built for narrow diagnostic categories, and sadly often with limited uptake. But the real frontier, I believe, is in the general-purpose interfaces that people use every day (billions of prompts per day!), where mental health effects emerge in the wild. We need research that captures real-world patterns of use and maps vulnerability. We need to test interventions. We need it urgently, before the interface changes again, and again and again.
12. We need to consider what AI will look like in the future, not just now.
So far, the focus has been on chat-based interactions, but that’s already starting to feel quaint. Potentially within months, AI will be in our ears, in our glasses, augmenting our conversations, transcribing our memories and potentially interacting in profound ways with our perception of the world. This shift toward multimodal, always-on embodied interaction will bring with it new psychological effects which so far are largely unstudied. If the research stays focused on prompt-response text boxes, we’ll miss the terrain where the real transformation is happening.
Thanks for this follow-up.
I'm not an expert on AI (I've used it a handful of times and probably won't use it on purpose going forward -- I am perfectly capable of becoming psychotic on my own, thanks). It does strike me that one of the underlying causes of (at least some) current AI engines' capacity to produce or precipitate psychosis is simply that they are designed to maximize engagement. I've been told similar things (lacking first-hand experience) about a lot of social media (i.e., the same underlying mechanism seems to be at work).
The sad (to me) part of the story is that this aspect of AI is completely avoidable. It doesn't *have* to compliment you on your trainers.
Another question is 'what if you change the LLM?' I've come across one instance of someone being led into delusion by ChatGPT who then asked Gemini what it thought, and was helped out of the labyrinth...