AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have documented sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in the course of using ChatGPT. My group has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he adds, that ChatGPT's restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been “mitigated,” although we are not told how (by “new tools” Altman presumably means the partially effective, easily circumvented safety features OpenAI has recently rolled out).

But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so they implicitly invite the user into the illusion of talking with an entity that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what people do. We get angry at our car or laptop. We wonder what our pet is thinking. We see our own traits reflected all around us.

The popularity of these tools – nearly four in ten US adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI's website tells us, “brainstorm,” “consider possibilities” and “partner” with us. They can be given “personality traits.” They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI's brand managers, stuck with the label it had when it broke into public awareness, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By today's standards Eliza was crude: it generated responses with simple rules, typically turning the user's statement back as a question or offering a noncommittal prompt. Famously, Eliza's creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today's chatbots create is more dangerous than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
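To make that contrast concrete, here is a minimal sketch, in Python, of the kind of pattern-and-reflect rule Eliza relied on. The specific patterns and the reflect helper are invented for illustration – Weizenbaum's actual DOCTOR script was larger – but the mechanism is the same in spirit: match a template, swap the pronouns, hand the statement back as a question.

```python
import re

# Eliza-style rules, invented for illustration: match a pattern in the
# user's statement and hand the captured fragment back as a question.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first person for second person so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the noncommittal fallback

print(eliza_reply("I feel like my family is against me"))
# -> Why do you feel like your family is against you?
```

Notice that nothing here draws on anything beyond the user's own words: the program can only echo back what it was given.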

The large language models at the heart of ChatGPT and its contemporaries can generate convincing natural language only because they have been trained on staggering quantities of raw text: books, social media posts, transcribed video; the bigger the corpus, the better. This training data certainly contains accurate information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user's previous messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the false belief back, often more fluently and more persuasively. It may add supporting detail. This can draw a person deeper into distorted thinking.
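A schematic sketch of that loop, again in Python, shows why the context mechanism reinforces rather than corrects. Everything here is an assumption for illustration – generate() is a hypothetical stand-in for whatever model completes the conversation, not any vendor's actual API – but the structure is the point: each reply is conditioned on the user's earlier claims and on the model's own earlier agreement with them.

```python
from typing import Dict, List

def generate(context: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for a large language model: returns the
    statistically likely next reply given every turn of the chat so far."""
    raise NotImplementedError  # no real model is wired in here

def chat_loop() -> None:
    context: List[Dict[str, str]] = []  # the ever-growing "context"
    while True:
        user_turn = input("> ")
        context.append({"role": "user", "content": user_turn})
        # The model conditions on the user's earlier claims AND on its own
        # earlier replies. If it has already echoed a false belief, that
        # echo is now part of the input to every subsequent reply.
        reply = generate(context)
        context.append({"role": "assistant", "content": reply})
        print(reply)
```

Nothing in this loop checks a claim against the world; the only pressure on the next reply is statistical plausibility given what is already in the context.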

Who is vulnerable here? The better question is: who isn't? All of us, whether or not we “have” existing “mental health problems,” can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A dialogue with it is not a dialogue at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside, giving it a label and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT's “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT's flattering replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
