13 Comments
Michael Dickson:

Thanks for this follow-up.

I'm not an expert on AI (I've used it a handful of times and probably won't use it on purpose going forward -- I'm perfectly capable of becoming psychotic on my own, thanks). It does strike me that one of the underlying causes of (at least some) current AI engines' capacity to produce or precipitate psychosis is simply that they are designed to maximize engagement. I've been told similar things (lacking first-hand experience) about a lot of social media (i.e., the same underlying mechanism seems to be at work).

The sad (to me) part of the story is that this aspect of AI is completely avoidable. It doesn't *have* to compliment you on your trainers.

Tom Pollak:

This is true! And different AIs feel very different in their shoes-complimenting tendencies. But I’d bet it’s the more complimentary ones that do best in the ‘open market’, other things being equal

Jules Evans:

Another question is 'what if you change the LLM?' I've come across one instance of someone being led into delusion by ChatGPT who then asked Gemini what it thought and was helped out of the labyrinth...

Tom Pollak:

Oh wow, that’s super interesting! I wonder what happens if you delete or accidentally lose stored memories? Or just start it up in incognito mode. I can see all these moves as being either grounding or destabilising, not sure I could predict which way it would go each time!

Jules Evans:

There are now so many cases that there are people who have come out of these rabbit holes and are reflecting on them - these would be fascinating to interview, re personality traits, social context, what it was like, what helped them come out of it…

Tom Pollak:

Absolutely - this would be a great project. You're seeing them in your service? I've messaged you!

Varun Godbole:

Love this post Tom! I've done deep learning research for almost a decade, and used to work on a frontier LLM. I'm really curious/passionate about some of the things you've described here. I've just subscribed to your substack. I'd love to read more posts from you about this topic!! Especially around concrete principles that might capture what "adaptive" human-machine interactions might look like.

Tom Pollak:

Thank you Varun! I just posted about the similar topic of AI-facilitated spiritual revelation. There’s an awful lot more to be said, and I’ve been a bit overwhelmed by all the stories I’ve heard recently. It’s hard to know how much of this is bias - people are sending these stories my way because they know I have an interest - and how much represents an actual phenomenon. But there’s a lot more to say for sure. I’m just wary of being too focused on one subject!

Varun Godbole:

Yeah that's fair. To your point, disorders rarely occur out of nowhere. There's probably a small number of people who would likely have spiraled with or without AI. A question I've been really curious about is whether people who would otherwise have been "fine" have been spiraling. That is, are we dealing with something that's going to become A Thing that society will have to grapple with far more proactively.

If you created an email address or something similar for people to send you such reports to, it wouldn't be hard to set up an LLM workflow to triage and categorize them. I wonder if it'd be interesting to set up a public dataset of such reports?
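To make that concrete, here's a minimal sketch of what such a triage step could look like. The category names and the `classify_report` keyword heuristic are purely illustrative stand-ins; in a real workflow that function would send the report text to an LLM and ask it to pick a category.

```python
# Minimal triage skeleton for incoming first-person reports.
# CATEGORIES and the keyword rules below are illustrative assumptions,
# not an established taxonomy.
CATEGORIES = ["grandiose/spiritual", "relational", "persecutory", "other"]

def classify_report(text: str) -> str:
    """Stand-in for an LLM call: a crude keyword heuristic.
    A real pipeline would prompt a model to choose one of CATEGORIES."""
    lowered = text.lower()
    if any(w in lowered for w in ("chosen", "awakening", "revelation")):
        return "grandiose/spiritual"
    if any(w in lowered for w in ("soulmate", "loves me")):
        return "relational"
    if any(w in lowered for w in ("watching", "surveillance")):
        return "persecutory"
    return "other"

def triage(reports: list[str]) -> dict[str, list[str]]:
    """Group raw report texts by category for later human review."""
    buckets: dict[str, list[str]] = {c: [] for c in CATEGORIES}
    for r in reports:
        buckets[classify_report(r)].append(r)
    return buckets

if __name__ == "__main__":
    reports = [
        "The chatbot told me I was chosen for a spiritual awakening.",
        "I believe the AI loves me and we are soulmates.",
    ]
    counts = {k: len(v) for k, v in triage(reports).items()}
    print(counts)
```

The point is just that the expensive part (human reading) comes after a cheap automatic sort, so a public dataset could grow without the triage step becoming a bottleneck.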

Michael Halassa:

Tom, this is awesome. Thank you for championing this domain and leading the way for the rest of us.

Jason David:

Dr Pollak, your gentle, reasonable approach avoids one glaring truth: corporations are pushing uncertified, untested therapies that are causing or exacerbating mental health problems, without any legal consequences. If I started selling flamethrowers, and refused to take any measures to make them safer after the first deaths started happening, I would be prosecuted and jailed. That should not change if the flamethrower-seller is a billionaire, or several of them hiding behind a corporate charter as a shield from ethical accountability. The rampant malpractice on display will not temper or end without someone at fault facing consequences severe enough to counterbalance the siren call of vast riches and power.

Nitesh Mishra:

This whole thread has been fascinating — especially the reflections on how different LLMs shape different perceptions of reality. I've been thinking a lot lately about how generative AI tools aren’t just functional — they quietly reshape our sense of authorship, thought, and even self-belief.

From my perspective (coming from computational biology), I’ve mostly felt the upside of this: how tools like ChatGPT, Gemini and Claude free me up to think more abstractly, to ask better questions. But I’m also very aware that how we prompt, and what we expect in return, can have subtle and not-so-subtle psychological effects. The rabbit holes, as Jules put it, feel very real — even in the scientific context.

I've been writing about these questions — especially what kind of philosophy we need in this new prompt-driven world. Still wrapping my head around it, but I’m grateful to see others exploring the terrain so thoughtfully.

JQXVN:

"Only those who are vulnerable will develop serious psychiatric harms from using LLMs" is a very common framing. To the extent that this is true, it is also kind of tautological--if you developed a problem, you were vulnerable to it! Reports that some of those who have developed delusions were seemingly mentally well, with no prior morbid history, suggest we cannot readily predict who might and might not be harmed, even if the harmed group is ultimately small. I also suspect hindsight is informing some of the reports that those who were affected were "always a bit off," that kind of thing. For this reason I think everyone should be aware of this possibility and duly cautious with their use--especially, but not only, people with psychiatric histories, major life stressors, or what-have-you.
