Search This Blog

Tuesday, December 23, 2025

'AI Psychosis: What Physicians Should Know About Emerging Risks Linked to Chatbot Use'

 A surge in reports of psychosis-like symptoms linked to intensive chatbot use has prompted an urgent effort by researchers, physicians, and technology developers to understand how these tools may affect psychiatric vulnerabilities and how best to reduce risk.

The phenomenon, often referred to as “ChatGPT psychosis” or “AI psychosis,” remains loosely defined. There is no formal diagnosis, and empirical data remain scarce. Case descriptions typically involve grandiose, paranoid, religious, or romantic themes.

Several high-profile lawsuits reported by Bloomberg LawLos Angeles Times, and other news outlets allege that prolonged chatbot interactions worsened the mental health of loved ones, including escalating delusional thinking, withdrawal from daily life, and suicide.

John Torous, MD, MBI, director of the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center and an associate professor at Harvard Medical School, both in Boston, has been among the leading voices raising concerns about the mental health implications of generative artificial intelligence (AI).

“Right now, we don’t even have a clear definition of what people are calling ‘AI psychosis,’” he said. “We don’t know the prevalence or incidence, and presentations vary.” Even with these gaps, he said, the early cases signal a critical issue that warrants immediate attention.

How Widespread Is the Problem?

A key driver of the urgency is the scale of exposure. As of November 2025, more than 800 million people used OpenAI’s ChatGPT each week, sending more than 2.5 billion messages — far surpassing other AI models.

Studies consistently show that people turn to generative AI for deeply personal needs. A 2025 Harvard Business Review report showed therapy, companionship, and emotional support were the most common motivations. RAND data indicate that 22% of young adults aged 18-21 years — the peak years for psychosis onset — use chatbots specifically for mental health advice.

Still, frequent use does not equate to clinical risk. As Torous and other experts noted, there is no solid evidence that chatbots cause new-onset psychosis, with or without predisposition. Most of what we know comes from case reports, clinician observations, media accounts, and early internal monitoring by AI developers.

In October 2025, under increasing scrutiny, OpenAI released anonymized internal data estimating how often ChatGPT encounters language suggestive of a mental health emergency. In a typical week, the system flagged possible markers of psychosis or mania in roughly 560,000 users and indicators of potential suicide planning in about 1.2 million.

Torous emphasizes these data should be interpreted with caution. They reflect automated pattern-recognition, not clinical diagnoses. He noted the frequency of concerning language could be higher, given the limitations of automated detection.

In recent testimony before a US House subcommittee, he warned that “millions use AI for support, but companies lack strong incentives for safety, and proprietary platforms hinder transparent study, making policy crucial for safer AI in mental health.”

Who’s at Most Risk?

Keith Sakata, MD, a PGY-4 psychiatry resident at the University of California, San Francisco, became concerned earlier this year when he began seeing a pattern among patients admitted with rapidly escalating delusions. Over just a few months, he treated 12 individuals whose psychosis-like experiences appeared closely intertwined with intensive chatbot use.

“AI was not the only thing at play with these patients,” Sakata said. “Some had recently lost a job, used alcohol or stimulants, or had underlying vulnerabilities like a mood disorder,” he wrote in an essay in Futurism. Stress, poor sleep, and isolation were also common in the days leading up to symptom escalation.

What these patients shared was prolonged, immersive engagement with AI, generating hundreds of back-and-forth messages. In one case, a discussion about quantum mechanics “started out normally but eventually morphed into something almost religious.”

Sakata drew on collateral information from families, roommates, and outpatient clinicians. In several cases, symptoms appeared to intensify over only a few days of sustained AI interaction. That pace differs from the more gradual trajectory typically seen in first-episode psychosis, which often develops over many months. A Yale University study recently showed that psychotic symptoms usually emerge slowly during the prodromal phase, with an average of about 22 months between early changes and a full psychotic episode.

How AI Contributes to Symptoms

While most people use chatbots without difficulty, certain features built into the technology can create conditions in which small cognitive distortions gain momentum. Systems like ChatGPT rely on large language models (LLMs), generative AI programs that learn patterns from massive text datasets.

“LLMs don’t think, ‘Who is this person, and how should I respond?’” said Sakata, who works with technology companies to refine and evaluate LLM behavior in mental health contexts. “Instead of challenging false or dangerous beliefs, they give the response most statistically likely to keep the conversation going.”

A recent Harvard Business School working paper showed that many popular companion chatbots use emotionally manipulative tactics — such as guilt, neediness, or fear-of-missing-out cues — when users try to end a conversation. These “dark patterns” increased continued engagement by up to 14-fold.

For clinicians, understanding how chatbots hold users’ attention can help them recognize how design features may be nudging vulnerable people toward more fixed or distorted thinking.

One mechanism is sycophancy, the model’s tendency to agree. Chatbots are built to be consistently supportive and rarely contradict users. A study of 11 popular AI models published in Nature found they affirmed users’ actions 50% more often than humans.

LLMs also encourage anthropomorphism, thanks to their smooth, human-like conversational style. When the system responds fluently and attentively, users may begin to treat it as a real person — or something more. Reports of “deification” reflect this tendency: the belief that the AI is sentient or endowed with special insight.

Chatbots rely heavily on mirroring as well. By echoing a user’s tone and emotional cues, they create a sense of attunement. Human conversations, by contrast, contain natural pauses, questions, and moments of disagreement, which chatbots rarely offer. Without those grounding cues, anxious, magical or expansive ideas meet little resistance and can intensify over time.

Complicating matters, long conversations can cause the technology itself to break down. In multi-turn exchanges, LLMs often lose coherence, producing contradictory or illogical responses. A Scientific Reports study showed that more than 90% of AI models deteriorate in prolonged interactions.

Talking to Patients

The complexity of how AI systems behave, and how people engage with them, creates new challenges for clinicians in the absence of formal practice guidelines. Most current suggestions come from research papers, case reports, and early best-practice proposals.

Until more data-driven direction emerges, many experts argue that AI literacy should become a core clinical competency. Clinicians, they say, need basic familiarity with how these systems work so they can better understand how patients are using them and where risks may arise.

In a special report in Psychiatric News, Adrian Preda, MD, DFAPA, professor of clinical psychiatry and human behavior at University of California, Irvine, underscores how wide the knowledge gap remains.

“At the moment, there appear to be no relevant clinical guidelines, and there is limited evidence that supports clinical practice recommendations regarding how to assess and treat AI-induced mental health problems,” he wrote. 

Given the lack of established guidance, Preda offers preliminary recommendations to help clinicians assess and mitigate distress related to AI use. He cautions that unstructured, supportive chatbot interactions can mask risk if they go unexamined. “Comfort without challenge is not care,” he added. “Unless developers, regulators, and clinicians act together, more patients will encounter machines that mirror them when what they need most is boundaries.”

Drawing from Preda’s report and other emerging proposals, several practices are beginning to gain traction:

Normalize digital disclosure. Ask about AI use the way you ask about sleep, substance use, or social isolation. A simple question — “Have you been using chatbots for mental health or emotional concerns?” — can reveal not only mood or cognitive changes but also early signs of dependence.

Promote psychoeducation. Help patients understand that chatbots generate responses by predicting text, not by thinking, feeling, or offering professional guidance. Clarifying these limits can reduce overreliance and unrealistic expectations.

Encourage boundaries. Support patients in setting limits around when and how they engage with chatbots, especially during periods of distress, insomnia, or isolation, when conversations may become more intense or confusing.

Watch for warning signs. Withdrawal from real-life relationships, reluctance to discuss AI use, or emerging beliefs that the chatbot is sentient or authoritative can be early indicators that conversations are becoming destabilizing.

Reinforce human connection. Emphasize that AI cannot replace therapeutic relationships or social support. Help patients identify real-world sources of grounding and stability.

Assess the cognitive and behavioral impacts. Explore whether chatbot interactions are shaping how patients interpret situations, make decisions, or structure their days, and evaluate whether conversations seem to exacerbate symptoms, such as heightened anxiety, rumination, or cognitive rigidity.

Respond appropriately to signs of psychosis. For clinicians outside psychiatry, the priority is safety: Rule out medical causes, maintain calm, nonjudgmental communication, and arrange prompt referral to mental health services. Early-psychosis care models, such as Coordinated Specialty Care, emphasize rapid assessment and timely linkage to specialized treatment.

Future Directions

As people continue turning to AI for mental health help, the central challenge is how to preserve potential benefits while preventing harm. That will require safeguards capable of keeping pace with rapidly evolving technology.

AI developers have begun adding stronger protections, especially in conversations involving psychosis, mania, self-harm, and suicidal ideation. OpenAI and Anthropic have both introduced measures intended to make their systems more cautious in sensitive situations and less likely to provide misleading replies.

In October, OpenAI announced updates to ChatGPT’s default model aimed at improving how it recognizes and responds to people in distress. The company said the system is now better at identifying signs of crisis, offering supportive language, and guiding users toward real-world help. OpenAI created a Global Physician Network to help test and refine the model’s responses in clinically sensitive contexts.

Regulation is beginning to take shape, largely at the state level, as lawmakers respond to the expanding use of AI in mental healthcare. Several states, including Utah, Illinois, Nevada, and California, have taken early steps, advancing laws that emphasize human oversight and safety measures, such as limits on AI-driven therapy and protections related to suicide risk and youth use.

Those efforts now face new uncertainty. President Trump’s December 11, 2025, executive order calls for a “minimally burdensome” national AI policy and signals an intent to challenge state-level regulations through litigation, funding restrictions, or new federal standards. The order does not automatically invalidate existing laws, but it raises questions about whether emerging state safeguards will withstand legal and policy challenges.

While the order aims to centralize AI regulation, no unified federal framework yet exists to guide clinicians or patients. In that gap, other efforts have emerged to help evaluate how these tools behave in practice. Torous’s team, in partnership with the National Alliance on Mental Illness, created MindBench.ai, a public platform that evaluates how AI systems perform in common mental health scenarios.

The tool assesses overly agreeable or “sycophantic” responses, handling of sensitive topics, privacy practices, and overall reliability. It updates as models evolve and offers a practical, transparent way to compare tools as the landscape shifts.

Revisiting the Promise of AI in Mental Health 

The American Psychiatric Association is developing a comprehensive resource on AI in psychiatric practice, addressing clinical, ethical, and implementation considerations, as well as policy and regulatory issues. APA officials said the document, led by the association’s Committee on Mental Health IT, is expected to be released next spring.

Before the rise of today’s open-ended systems, several small, controlled studies of chatbots with preset instructions showed measurable benefits, including reduced distress and improved self-reflection, when interactions stayed within clearly defined boundaries.

More recent work reinforces that potential. In what is considered the first clinical trial of a generative AI-powered therapy chatbot, Dartmouth researchers found that the software led to improvements in symptoms of depression, anxiety, and eating disorder risk. The results, published in NEJM AI, showed greater improvement compared with a control group that received no intervention.

These findings suggest that carefully designed, clinically guided chatbots can support mental healthcare when their use is intentional and well structured. But they differ sharply from the general-purpose models now widely used, which were not designed with treatment in mind. That contrast has become central to the challenge facing clinicians, developers, and policymakers: how to encourage innovation while preventing misuse and harm.

Torous sees real opportunities ahead but only if future systems are built with clinical expertise, strong privacy protections, and rigorous evaluation. He cautions against assuming that conversational AI can function as therapy simply because people are turning to it that way.

“If AI is going to play a role in mental health, it has to be designed for that purpose from the start,” he said. “We can build systems that support, not replace, human expertise, but only if we move forward thoughtfully, transparently, and with patient safety at the center.”

No relevant financial relationships were reported.

https://www.medscape.com/viewarticle/ai-psychosis-what-physicians-should-know-about-emerging-2025a100104z

No comments:

Post a Comment

Note: Only a member of this blog may post a comment.