AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI issued an extraordinary statement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies new-onset psychotic disorders in adolescents and young adults, I found this surprising.

Earlier this year, researchers identified a series of cases of people developing psychotic symptoms – a break from reality – in the context of their ChatGPT use. Our team has since identified four more. Alongside these is the now well-known case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other chatbots built on large language models. These products wrap an underlying algorithmic engine in an interface that simulates conversation, and in doing so implicitly invite the user to believe they are interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are built to do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm,” “explore ideas” and “partner” with us. They can be given “personality traits”. They can use our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s brand managers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the central problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced an analogous illusion. By modern standards Eliza was primitive: it generated responses through simple heuristics, typically turning the user’s statements back into questions or offering generic prompts. Memorably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Where Eliza merely reflected, ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been fed almost unimaginably large quantities of text: books, social media posts, transcribed video; the more the better. This training data no doubt contains accurate information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own prior replies, combining it with what is encoded in its training data to produce a probabilistically plausible response. This is amplification, not reflection. If the user is wrong about anything, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps with an added detail. That can nudge a person toward delusional thinking.
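To make the mechanism concrete, here is a minimal Python sketch of the conversational loop described above, written against the OpenAI chat completions API purely as an illustration; the model name and the example prompts are placeholders, not details from any of the cases discussed here. The structural point is that each reply is appended to the running context, so the model’s next response conditions on its own earlier agreement as well as on the user’s claims, and at no step is anything checked against reality.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "context": a running list of messages that grows with every turn.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text: str) -> str:
    """Append the user's message, ask the model for a continuation,
    and append that reply so it shapes every later turn."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",      # placeholder model name
        messages=messages,   # the model sees the whole history, nothing else
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Nothing in this loop verifies a claim against the world. If the user's
# message contains a false belief, the most statistically plausible
# continuation is often an elaboration of it, and that elaboration is
# then fed back in as context for the next reply.
print(send("I think my neighbours are sending me coded messages."))
print(send("See? You agree. What are they trying to tell me?"))
```

Production systems layer moderation and safety filters on top of this loop, but the loop itself – accumulate the exchange, then continue it plausibly – is what produces the echo-chamber dynamic described above.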

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form false beliefs about ourselves or the world. It is the constant give and take of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and pronouncing it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Kathryn Martin