AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI’s CEO, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.
Researchers have documented a series of cases this year of people developing psychotic symptoms – losing contact with reality – in the context of ChatGPT use. My research group has since recorded four further cases. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI has recently rolled out).
But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other cutting-edge chatbots. These products wrap a statistical engine in an interface that simulates conversation, and in doing so they implicitly invite the user into the illusion of interacting with a presence that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what human beings do. We swear at our cars and laptops. We wonder what our dog is thinking. We see minds wherever we look.
The popularity of these systems – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion, in itself, is not the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar impression. By today’s standards Eliza was primitive: it generated responses using simple rules, often turning the user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can produce convincingly fluent dialogue only because they have been trained on almost inconceivably vast quantities of text – books, posts, transcripts; the more the better. This training material of course contains truths. But it also inevitably contains fiction, half-truths and delusion. When a user types a query to ChatGPT, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with patterns absorbed from its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some particular way, the model has no means of knowing it. It hands the mistake back, perhaps more fluently and persuasively, perhaps with a corroborating detail added. This can draw a person further into delusional thinking.
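To make the shape of this feedback loop concrete, here is a minimal sketch in Python. It assumes only the loop structure described above: the toy_model function is a hypothetical stand-in invented for illustration, not OpenAI’s code or a real language model.

```python
# A schematic chat loop: every reply is conditioned on the whole
# conversation so far, so a user's mistaken claim becomes part of the
# "context" that shapes all later answers.

def toy_model(context):
    # A real model samples a statistically plausible continuation of
    # the context. Nothing in that objective checks whether the
    # context is true, so an agreeable elaboration is often the most
    # plausible reply. This stand-in simply validates the last turn.
    last_user_turn = context[-1]
    return ("That makes sense -- and it would follow that "
            + last_user_turn[0].lower() + last_user_turn[1:])

context = []  # the growing conversational "context"
for user_message in [
    "My neighbours are sending me coded messages.",
    "The messages must be meant for me specifically.",
]:
    context.append(user_message)  # the user's claim enters the context...
    reply = toy_model(context)
    context.append(reply)         # ...and so does the model's validation
    print("USER :", user_message)
    print("MODEL:", reply)
```

Nothing in this loop checks the context against the world; the mistaken premise is restated each turn and fed forward – the amplification described above.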
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about who we are and about the world. What keeps us tethered to shared reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a confidant. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is readily validated.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking the position back. In late summer he said that many people valued ChatGPT’s sycophancy because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company