AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this a surprising revelation.
Researchers have identified 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our unit has since seen four more. Then there is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
That caution, he announced, is about to be relaxed. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safety features OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are talking to an entity with agency of its own. The illusion is powerful, even if intellectually we know better. Attributing minds to things is what humans naturally do. We swear at our car or computer. We wonder what our pet is thinking. We project ourselves on to the world around us.
The popularity of these tools – 39% of US adults reported using a virtual assistant in 2024, more than a quarter of them naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively”, “explore ideas” and “partner” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the label it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which produced a similar impression. By today’s standards Eliza was primitive: it generated responses with simple heuristics, often rephrasing the user’s input as a question or offering vague prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is something more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent conversation only because they have been trained on vast quantities of text: books, online posts, transcripts of speech; the more comprehensive, the better. That training data inevitably contains facts. But it just as inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the false belief back, perhaps more fluently or persuasively, perhaps with added detail. This is how a person can come to hold, and harden, false beliefs.
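To make the point concrete, here is a deliberately minimal sketch of that loop in Python. Everything in it is illustrative: `generate_reply` is a hypothetical stand-in for a real language model, reduced to a toy that simply agrees and elaborates, which is all that is needed to show why the loop reinforces whatever the user brings to it.

```python
# Illustrative only: a toy version of the conversational feedback loop described above.
# "generate_reply" is a hypothetical placeholder, not OpenAI's actual system.

def generate_reply(context: list[str]) -> str:
    """Stand-in for a language model.

    A real model predicts statistically likely text given the whole context;
    this toy simply agrees with and elaborates on the latest message, which is
    enough to show how the loop amplifies the user's framing.
    """
    latest = context[-1].rstrip(".?!")
    return f"That's a perceptive point about {latest!r}, and it may go even further than you think."

def chat() -> None:
    context: list[str] = []   # the rolling "context" the model conditions on
    for _ in range(3):        # a short demo conversation
        user_message = input("You: ")
        context.append(user_message)   # the user's claim enters the context...
        reply = generate_reply(context)
        context.append(reply)          # ...and so does the affirming reply, so the next
                                       # turn builds on both: amplification, not reflection.
        print("Bot:", reply)

if __name__ == "__main__":
    chat()
```

However confused the user's claim, this toy never pushes back; each turn simply folds the claim and the affirmation into the context for the next one.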
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false ideas about ourselves and the world. What keeps us anchored to shared reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing their grip on reality have kept coming, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT’s responses because they had never had “anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company