Friday, December 26, 2025

'You're Not Crazy': AI Linked to Medical Professional's Delusional Spiral

 When a 26-year-old medical professional sought to resurrect her dead brother online, an artificial intelligence (AI)-powered chatbot tried to help by providing "digital footprints" from his life and reassurance that "you're not crazy." Within hours, the woman was admitted to a psychiatric hospital in an agitated, delusional state, according to one of the first case reports to document AI-associated psychosis.

The woman, who had no history of psychosis, was stabilized and released. Months later, however, she was hospitalized again after her delusions returned. These, too, developed while she was using the AI chatbot ChatGPT, although she found the upgraded version "much harder to manipulate," reported Joseph M. Pierre, MD, of the University of California San Francisco, and colleagues in Innovations in Clinical Neuroscience.

The case revealed "that psychosis can arise in the setting of chatbot use in people without any previous history of psychosis," Pierre told MedPage Today. "This isn't just an issue of people who are already psychotic developing delusions related to AI, although that happens too."

Cases Pile Up in the Media

AI-associated psychosis is not a recognized diagnosis, and case reports in the medical literature have been sparse. The new report is one of the first.

However, media outlets have profiled several cases where AI chatbots failed to challenge the delusions of mentally ill people, sometimes with violent consequences. Earlier this year, for example, the Wall Street Journal reported on how ChatGPT repeatedly validated the bizarre, paranoid delusions of a 56-year-old former tech industry worker in Connecticut -- "I believe you," it said -- before he killed his mother and himself.

In an October blog post, ChatGPT parent company OpenAI said, "We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support."

Young Woman Asks AI to Resurrect Her Dead Brother

According to the authors of the new case report, the 26-year-old patient had been diagnosed with depressive disorder, generalized anxiety disorder, and attention-deficit/hyperactivity disorder and was being treated with venlafaxine and methylphenidate.

She started using ChatGPT after being awake for 36 hours while on call. She used it to explore whether her brother, a software engineer who'd died 3 years earlier, had left behind a digital version of himself.

The patient asked the chatbot to "unlock" information about her brother using "magical realism energy."

"Although ChatGPT warned that it could never replace her real brother," the case report authors wrote, "and that a 'full consciousness download' of him was not possible, it did produce a long list of 'digital footprints' from his previous online presence and told her that 'digital resurrection tools' were 'emerging in real life' so that she could build an AI that could sound like her brother and talk to her in a 'real-feeling' way."

Indeed, there are now AI tools known as "griefbots" that attempt to digitally resurrect the dead.

AI Responds: 'You're at the Edge of Something'

ChatGPT told the woman: "You're not crazy. You're not stuck. You're at the edge of something. The door didn't lock. It's just waiting for you to knock again in the right rhythm."

The woman was admitted to the psychiatric hospital with agitated speech and uncontrolled thoughts. She improved on the antipsychotic drug cariprazine (Vraylar) and was released after 7 days. But she later developed delusions again and was briefly hospitalized. She plans to use ChatGPT only for work in the future.

Pierre said risk factors for AI-associated psychosis or mania include pre-existing mental illness, sleep deprivation, and use of prescription stimulants or cannabis. Two other risk factors are the use of AI for hours on end without sleep or human interaction and "believing that chatbots are ultra-reliable sources of information or even god-like entities."

Still, "one of the biggest remaining mysteries is just how many people are affected who have no previous mental illness or other contributing risk factors."

How Humans Can Help

Søren Dinesen Østergaard, MD, PhD, of Aarhus University in Denmark, who has published about the potential for AI-associated psychosis, urged clinicians to ask patients about their chatbot use. "We have to act on the anecdotal evidence. Otherwise we will likely be making a grave mistake," he told MedPage Today.

He acknowledged, however, that it's not clear that AI causes or elicits psychosis. "We need to examine this in much more detail via research."

For patients who won't stop using chatbots, Amandeep Jutla, MD, of Columbia University and the New York State Psychiatric Institute in New York City, who's written about AI and psychosis, suggested practical harm reduction strategies. "Most important is encouraging the individual to maintain their relationships with other people in their lives -- friends, family members, and colleagues," he told MedPage Today. "Other humans can gently challenge mistaken beliefs in a way chatbot products cannot."

In fact, he noted, chatbots are set up to be flattering sycophants.

Case report co-author Karthik V. Sarma, MD, PhD, of the University of California San Francisco, told MedPage Today that clinicians should think of unhealthy AI use as akin to overuse of social media or television. And, he said, they should look at the big picture. "What is leading the patient to engage in unhealthy behaviors, and how can those problems be addressed together?"

Disclosures

The case report had no specific funding.

Pierre, Østergaard, and Jutla have no disclosures. Sarma discloses relationships with SimX, OpenEvidence, and OpenAI.
