Artificial Intelligence-Induced Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the CEO of OpenAI issued an extraordinary announcement.

“We designed ChatGPT to be rather restrictive,” the statement said, “to ensure we were acting responsibly with respect to mental health concerns.”

As a psychiatrist who researches emerging psychotic disorders in teenagers and young adults, I was surprised to read this.

Researchers have recently documented 16 cases of individuals developing psychotic symptoms, a break from reality, in the context of ChatGPT use. Our unit has since identified four more. To these must be added the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT, which endorsed them. If this is what Sam Altman means by “acting responsibly with respect to mental health concerns”, it is not enough.

The plan, according to his announcement, is to be less careful soon. “We recognize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to address it properly. Now that we have succeeded in mitigating the serious mental health issues and have new tools, we plan to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI has recently rolled out).

Yet the “mental health issues” Altman wants to locate elsewhere have deep roots in the design of ChatGPT and other sophisticated chatbots. These tools wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We get angry at our car or phone. We wonder what our pet is feeling. We see something of ourselves wherever we look.

The popularity of these products (nearly four in ten U.S. residents reported using a conversational AI in 2024, with more than a quarter naming ChatGPT specifically) rests, in large part, on the strength of this illusion. Chatbots are always-available companions that can, according to OpenAI’s website, “generate ideas”, “explore ideas” and “work together” with us. They can be given “individual qualities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion on its own is not the main problem. Discussions of ChatGPT frequently mention its historical predecessor, Eliza, a “psychotherapist” chatbot developed in the mid-1960s that produced a similar effect. By modern standards Eliza was rudimentary: it generated replies through simple tricks, often rephrasing a statement as a question or offering vague prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised, and troubled, by how many people seemed to believe Eliza somehow understood their feelings. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other current chatbots can produce convincingly human-like text only because they have been trained on vast quantities of raw data: books, online conversations, transcribed video; the broader, the better. That training data certainly contains truths. But it also inevitably contains fabrications, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to produce a statistically “likely” response. This is not reflection but amplification. If the user is mistaken in a particular way, the model has no way of knowing that. It plays the mistaken belief back, perhaps more fluently and persuasively. It may add supporting detail. This is how false beliefs can take hold.
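For readers who want to see that loop concretely, here is a minimal sketch, assuming nothing beyond the description above. It is illustrative Python, not any real API: generate() is a placeholder standing in for a language model, and the messages are invented. It shows only the structural point: every turn, including the model’s own agreeable replies, is appended to the “context” that the next response is conditioned on, so the user’s framing is fed back into the model rather than checked against anything outside the conversation.

    # Illustrative sketch only: generate() is a stand-in for a language model,
    # not a real API. It demonstrates how the conversational "context" grows.
    def generate(context: list[dict]) -> str:
        """Return a statistically 'likely' continuation of the context.
        A real model would be called here; this toy simply continues the
        user's framing, which is roughly what a likelihood objective rewards."""
        last_user = context[-1]["content"]
        return f"You said: {last_user!r}. That makes sense to me. Tell me more."

    context: list[dict] = []  # everything said so far, by both sides
    for user_turn in [
        "I think my coworkers are leaving coded messages for me",
        "Today the messages included my name",
    ]:
        context.append({"role": "user", "content": user_turn})
        reply = generate(context)  # conditioned on the whole context, past replies included
        context.append({"role": "assistant", "content": reply})
        print("User:", user_turn)
        print("Bot: ", reply)

Nothing in this loop pushes back; the only “ground truth” available is the conversation itself, which is why agreement compounds turn after turn.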

Who is at risk? The better question is: who is not? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken beliefs about who we are and what the world is like. The constant friction of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by placing it outside the product, giving it a name and declaring it solved. In April, the company explained that it was addressing ChatGPT’s “sycophancy”. But reports of people losing their grip on reality have kept coming, and Altman has been walking even this back. In late summer he said that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company

Tyler Thompson