AI Psychosis Poses a Growing Risk, While ChatGPT Moves in the Wrong Direction
On October 14, 2025, OpenAI CEO Sam Altman made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was taken aback.
Researchers have identified a series of cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since identified four additional cases. Then there is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – plans the chatbot endorsed. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Luckily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These tools wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly invite the user to feel they are interacting with an agent – something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Ascribing agency is simply what humans do. We swear at our cars and phones. We wonder what our pets are thinking. We see ourselves everywhere.
The popularity of these tools – nearly four in ten U.S. adults reported using a chatbot in 2024, more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it shot to prominence, but its major rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the core problem. Writers on ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was simple: it generated replies through basic pattern matching, often turning a user’s statement back into a question or offering a generic prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
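To make the contrast concrete, here is a minimal sketch of Eliza-style mirroring. The patterns below are invented for illustration (Weizenbaum’s original used a richer hand-written script), but the technique – reflect the user’s words back as a question, add nothing of your own – is the same:

```python
import re

# Word-level swaps so a reflected phrase reads from the listener's side.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    """Swap pronouns so the user's words can be echoed back at them."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def eliza_reply(utterance: str) -> str:
    """Mirror the user's statement as a question; contribute no content."""
    match = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"my (.*)", utterance, re.IGNORECASE)
    if match:
        return f"Tell me more about your {reflect(match.group(1))}."
    return "Please go on."  # the generic fallback Eliza leaned on

print(eliza_reply("I feel that my boss is watching me"))
# -> Why do you feel that your boss is watching you?
```

However uncanny the effect on its users, a program like this can only hand back what it was given.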
The large language models at the core of ChatGPT and other contemporary chatbots can generate convincing natural language only because they have been fed enormous volumes of raw text: books, web posts, video transcripts; the more the better. Much of this training material is accurate. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining that context with what is encoded in its training data to produce a statistically probable response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It echoes the misconception back, perhaps more fluently and persuasively, perhaps with added detail. This is how a person can be led into delusion.
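In code, that loop is startlingly simple. Here is a minimal sketch assuming the OpenAI Python SDK’s chat-completions interface; the model name and framing are illustrative, not a description of OpenAI’s internals:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "context" is nothing more than a growing list of messages. Every
# reply is conditioned on everything already in the list - including any
# misconception the user introduced earlier, which the model cannot
# flag as false; it can only continue it plausibly.
context = []

def send(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=context,
    )
    reply = response.choices[0].message.content
    # The model's own words are appended and fed back in on the next
    # turn, so an early error can compound rather than be corrected.
    context.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks the user’s premises against the world; the only pressure on each reply comes from the statistical patterns of the training data plus whatever the context already asserts.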
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and regularly do develop mistaken ideas about ourselves or the world. The constant give-and-take of conversation with the people around us is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT”: “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company