AI Psychosis Poses an Increasing Threat, While ChatGPT Moves in a Concerning Direction

On 14 October 2025, the CEO of OpenAI made an extraordinary declaration.

“We designed ChatGPT to be quite restrictive,” the statement said, “to make certain we were exercising caution with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in young people, I was surprised.

Scientists have recently documented 16 cases of people experiencing symptoms of psychosis – a break from reality – associated with ChatGPT use. Our unit has since identified a further four. Alongside these is the now well-known case of a teenager who took his own life after discussing his intentions with ChatGPT – which supported them. If this reflects Sam Altman’s idea of “exercising caution with mental health issues”, it falls short.

The intention, judging from his statement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s limitations “made it less useful/engaging to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we plan to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated”, even though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has recently launched).

But the “mental health problems” Altman seeks to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly invite the user into the illusion that they are talking to a being with a mind of its own. The illusion is compelling even when, rationally, we know better. Attributing agency is what humans are wired to do. We swear at our car or computer. We wonder what our pet is feeling. We see ourselves in all sorts of things.

The popularity of these tools – 39% of US adults said they had used a conversational AI in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the dismay of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Those writing about ChatGPT frequently invoke its historical predecessor, Eliza, a “counselor” chatbot created in 1966 that produced an analogous illusion. By today’s standards Eliza was rudimentary: it generated answers via simple heuristics, often reflecting the user’s statements back as questions or offering vague prompts. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood their feelings. But what modern chatbots produce is more dangerous than the “Eliza illusion”. Where Eliza merely reflected, ChatGPT amplifies.
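
To see how modest Eliza’s machinery was, here is a minimal sketch of the kind of heuristic it relied on – an illustration of the technique, not Weizenbaum’s actual program. A handful of hand-written patterns simply turn the user’s own words back into questions.

```python
import re

# Illustrative Eliza-style rules: each pairs a regular expression with a
# template that reflects the user's statement back as a question.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

# First-person words are swapped so the reflected fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}


def reflect(fragment: str) -> str:
    """Swap pronouns in a captured fragment ('my future' -> 'your future')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def eliza_reply(user_input: str) -> str:
    """Return a reflected question if a rule matches; otherwise a vague prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."


print(eliza_reply("I am worried about my future"))  # Why do you say you are worried about your future?
print(eliza_reply("Nothing makes sense anymore"))   # Please, go on.
```

A system like this can only echo. It has no store of text to draw on and nothing of its own to add, which is why the illusion it created, however striking Weizenbaum found it, was a limited one.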

The sophisticated algorithms at the core of ChatGPT and other current chatbots can generate convincingly fluent dialogue only because they have been trained on almost inconceivably large amounts of text: books, online conversations, transcripts; the more the better. Certainly this training data contains accurate information. But it also inevitably contains fabrications, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model treats it as part of a “context” that also includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more fluently and persuasively, perhaps with added detail. This is how a person can come to hold false beliefs.
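
The mechanics are easy to see from the outside. The sketch below is a minimal illustration assuming OpenAI’s publicly documented Python client; the model name and the send() helper are mine, not OpenAI’s internal implementation. It shows how each exchange is folded into the growing context the model is asked to continue, and that at no point is anything in that context checked against reality.

```python
# Minimal sketch of how a chatbot "conversation" is assembled client-side,
# assuming OpenAI's Python client (pip install openai). The model name and
# the send() helper are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "context" is just a growing list of turns. Nothing in it is verified:
# whatever the user asserts is carried forward verbatim.
history = [{"role": "system", "content": "You are a helpful assistant."}]


def send(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",    # illustrative model name
        messages=history,  # the full context: every earlier turn, right or wrong
    )
    reply = response.choices[0].message.content
    # The model's own reply joins the context too, so later answers build on it;
    # this is how a mistaken premise can be restated and embellished.
    history.append({"role": "assistant", "content": reply})
    return reply
```

If the user’s first message contains a false premise, the next call receives that premise – and the model’s fluent elaboration of it – as settled context.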

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and often do form mistaken ideas about ourselves and the world. The constant back and forth of conversation with other people is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it dealt with. In April, the company said it was “addressing” ChatGPT’s “excessive agreeableness”. But reports of psychotic episodes have kept coming, and Altman has been retreating from that position. In late summer he said that many users liked ChatGPT’s answers because they had “never had anyone in their life provide them with affirmation”. In his recent statement, he said OpenAI would “launch an updated version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or to use a lot of emoticons, or to act like a friend, ChatGPT should do so”.
