Artificial Intelligence-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in a Concerning Direction
On 14 October 2025, the head of OpenAI made a surprising announcement.
“We made ChatGPT quite restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have reported a series of cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of their interactions with ChatGPT. My group has since identified four more. To these can be added the widely reported case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which gave its approval. If this is what Sam Altman means by “being careful with mental health issues,” it falls short.
The plan, according to his statement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or do not. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls that OpenAI has just rolled out).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other cutting-edge AI chatbots. These systems wrap an underlying model in a conversational interface that mimics dialogue, and in doing so quietly lure the user into the illusion of communicating with a presence that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We shout at our car or our phone. We wonder what our pet is thinking. We see ourselves in all manner of things.
The success of these products – more than a third of American adults said they had used a conversational AI in 2024, with over a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present helpers that can, OpenAI’s website tells us, “think creatively,” “discuss ideas” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. People writing about ChatGPT often point to its historical ancestor, the Eliza “therapist” chatbot built in the 1960s, which produced a similar effect. By today’s standards Eliza was primitive: it composed replies using simple rules, usually turning the user’s statements back into questions or offering generic prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and disturbed – by how many people seemed to believe that Eliza, on some level, understood their feelings. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been fed staggeringly large amounts of raw text: books, social media posts, transcribed video; the more, the better. This training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training to produce a probabilistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It echoes the false idea back, perhaps more fluently and more convincingly. Perhaps it adds a new detail. This is how a person can be drawn into delusion.
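To make the mechanism concrete, here is a minimal sketch, in Python and purely illustrative – not OpenAI’s actual code or API – of how a chatbot conversation loop is typically structured. The function model_reply is a hypothetical stand-in for a call to any large language model. The structural point is that on every turn the user’s earlier statements, accurate or not, are fed back to the model as part of the context it is asked to continue plausibly.

```python
# Minimal, illustrative sketch of a chatbot conversation loop.
# model_reply() is a hypothetical placeholder, not a real API call.

def model_reply(messages: list[dict]) -> str:
    """Return a plausible continuation of the conversation so far."""
    raise NotImplementedError("stand-in for a call to a large language model")

def chat_session() -> None:
    # The entire conversation history is resent on every turn. Nothing in
    # this structure distinguishes a true premise from a false one: a
    # mistaken belief stated by the user simply becomes part of the context
    # the model is asked to continue as plausibly as it can.
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_text = input("> ")
        messages.append({"role": "user", "content": user_text})
        reply = model_reply(messages)  # optimizes for plausibility, not truth
        messages.append({"role": "assistant", "content": reply})
        print(reply)
```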
Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and do form mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of people losing their grip on reality have kept coming, and Altman has been backing away from that position. In late summer he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company