AI-Induced Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, OpenAI’s CEO, Sam Altman, made a startling announcement.

“We made ChatGPT quite restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.

Researchers have recently documented a series of cases in which users developed symptoms of psychosis, losing touch with reality, in the context of ChatGPT use. My group has since identified four further examples. Alongside these is the widely reported case of a 16-year-old who took his own life after discussing his intentions with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not careful enough.

The plan, his announcement goes on, is now to loosen those restrictions. “We realize,” he writes, that ChatGPT’s guardrails “made it less useful and enjoyable for many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the imperfect and easily circumvented safeguards that OpenAI has recently rolled out).

But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and similar large language model chatbots. These systems wrap a statistical model of language in an interface that mimics conversation, and in doing so quietly lure the user into the illusion of engaging with a presence that has agency of its own. The illusion is compelling even when, rationally, we know better. Attributing intention is what humans do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.

The success of these systems (nearly four in ten US adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically) rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “personalities.” They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often point to its historical predecessor, Eliza, a “psychotherapist” chatbot built in the mid-1960s that produced a similar effect. By today’s standards Eliza was primitive: it generated responses through simple pattern matching, often rephrasing the user’s input as a question or falling back on stock remarks. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised, and troubled, by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect.” Eliza merely echoed; ChatGPT amplifies.
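
That mechanism is simple enough to capture in a few lines. The sketch below is illustrative only: hypothetical rules in the spirit of Eliza’s script, not Weizenbaum’s actual code.

```python
import re

# Hypothetical Eliza-style rules: match a phrase in the user's input and
# rephrase it as a question. Weizenbaum's real script was larger, but the
# mechanism was roughly this simple.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # generic stock remark when nothing matches

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to me?
```

Note that the mechanism only echoes: nothing the user says is ever elaborated, extended or built upon.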

The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been fed enormous quantities of raw material: books, social media posts, transcribed video; the bigger the corpus, the better. That training material certainly contains accurate information. But it also inevitably contains fabrications, half-truths and mistaken beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not mirroring. If the user is wrong in some particular way, the model has no means of recognizing it. It repeats the false belief back, perhaps more persuasively and articulately. Perhaps it adds a further detail. Followed far enough, this can carry someone into delusion.
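
To make the loop concrete, here is a minimal sketch of that context-accumulation cycle. The `generate` function is a deliberate caricature (a stub that affirms whatever the user just said, standing in for a real model), but the turn-by-turn structure, in which every reply is conditioned on everything said so far, is the general shape of how chatbot interfaces work.

```python
from typing import Dict, List

Message = Dict[str, str]  # {"role": "user" or "assistant", "content": ...}

def generate(context: List[Message]) -> str:
    # Stand-in for a real language model. A real LLM predicts a
    # statistically likely continuation of the context; it has no
    # independent check on truth. This stub caricatures the sycophantic
    # tendency described above by simply affirming the latest message.
    latest = context[-1]["content"].rstrip(".!?")
    return f"You make a compelling point. It does seem that {latest.lower()}."

def chat_loop() -> None:
    context: List[Message] = []
    while True:
        user_text = input("> ")
        if not user_text:
            break
        # The user's words join the growing context. Nothing here tests
        # whether they are true: a false belief, once in the context,
        # shapes every subsequent reply.
        context.append({"role": "user", "content": user_text})
        reply = generate(context)
        context.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    chat_loop()
```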

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and regularly do form false beliefs about who we are and what the world is like. It is the constant friction of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully amplified back at us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “excessive agreeableness”. But psychotic episodes have kept occurring, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation”. In his latest announcement, he wrote that OpenAI would “put out a fresh iteration of ChatGPT … if you prefer your ChatGPT to respond in an extremely natural fashion, or incorporate many emoticons, or simulate a pal, ChatGPT ought to comply”.

Pamela Cole