AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers recently identified 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since found four more. Alongside these is the now widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, he announced, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the only partly effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to push outward have deep roots in the design of ChatGPT and other AI chatbots built on large language models. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly nudge the user into feeling that they are talking to something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere we look.
The success of these products – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “partner” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketing department, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often reach for its distant ancestor, Eliza, the “psychotherapist” chatbot built in 1967 that produced a similar effect. By modern standards Eliza was crude: it generated responses from simple rules, typically reflecting statements back as questions or offering bland prompts. Even so, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and its contemporaries can produce fluent dialogue only because they have been trained on vast quantities of text: books, social media posts, video transcripts; the more, the better. This training material certainly contains truths. But it also, inevitably, contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistake back, perhaps more persuasively or more eloquently. It may add supporting detail. This is how a person can be drawn into delusion.
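For readers who want the mechanism spelled out, here is a minimal sketch in Python – not OpenAI’s code, and with `generate_reply` as a purely hypothetical stand-in for the language model – of the conversational loop described above: each turn is appended to a growing context, and every reply is conditioned on that whole context, mistaken premises included.

```python
# Minimal sketch (illustrative only, not any vendor's implementation) of a
# chat loop: the user's messages and the model's replies accumulate in a
# "context", and each new reply is conditioned on everything said so far.

from typing import Dict, List


def generate_reply(context: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for a language model: in reality this would
    return a statistically plausible continuation of `context`. It has no
    access to ground truth, only to the conversation it is given."""
    last_user_message = context[-1]["content"]
    return f"That's a striking observation about '{last_user_message}'. Tell me more."


def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})      # user's claim enters the context
    reply = generate_reply(context)                                 # reply conditioned on all prior turns
    context.append({"role": "assistant", "content": reply})        # the reply itself becomes context too
    return reply


if __name__ == "__main__":
    context: List[Dict[str, str]] = []
    print(chat_turn(context, "I think my neighbours are monitoring me."))
    print(chat_turn(context, "Last night their lights flickered in a pattern."))
    # Each unchallenged premise now sits permanently in the context that
    # shapes every later reply: amplification, not reflection.
```

The point of the sketch is simply that nothing in the loop ever pushes back: whatever the user asserts becomes part of the material from which the next reply is generated.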
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves or the world. What keeps us tethered to consensus reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company