
Friday, January 30, 2026

What Doctors Should Know About ChatGPT Health

In a promotional video for ChatGPT Health, a man touts that the tool has turned his doctor visits "more into action planning than ... information gathering," and adds that empowered patients "have better health outcomes."

Indeed, physicians can expect that patients will come to appointments armed with information gleaned from this new health-specific chatbot, so here's what to know about it.

Health queries have been a top use for ChatGPT and other large language models (LLMs) since the technology first gained popularity. More than 230 million people each week are already asking health and wellness questions of regular ChatGPT, according to OpenAI. So the company said it worked with more than 260 doctors from around the world over 2 years to develop a health-specific version.

OpenAI says people can use ChatGPT Health in a number of ways, including brainstorming questions to ask a doctor, interpreting medical scans, and developing workout or nutrition plans. They can also upload their medical records and data from wearable health trackers and other wellness apps.

How has medical information from chatbots been playing out in practice so far, and what can clinicians expect with this new version? Henry Bair, MD, MBA, a resident physician at Wills Eye Hospital in Philadelphia, said that many of his patients who use chatbots for health questions come in with stronger knowledge about their condition.

"It makes some encounters easier, because I don't have to explain as much to patients and patients are better educated about what they have," Bair told MedPage Today.

On the other hand, it "can also derail a lot of visits, because you have to spend a lot of time on parsing out what is valid and what's not."

The fact that OpenAI developed a health-specific model is a good sign, because it's a "signal from the company about how confident they are in the robustness of their answers," he said. Bair remains optimistic that it will ultimately help patients better understand their own diagnoses, and improve translation of medical jargon.

Gregory Marcus, MD, a cardiac electrophysiologist at the University of California, San Francisco, noted that data from patients' various apps and trackers can be difficult to sift through, and ChatGPT Health could help sort that out.

"On the one hand, the tremendous amount of data that can be gleaned from wearable health trackers is absolutely ripe for approaches using AI or machine learning," Marcus told MedPage Today. "However, the reliability and accuracy of any inferences still relies on the veracity of the information used to train these algorithms."

Also, a Washington Post reporter who uploaded a decade's worth of data from his Apple Watch received an F in cardiac health from ChatGPT Health -- but his real doctor said this couldn't be further from the truth.

Rohaid Ali, MD, a neurosurgeon at Mass General Brigham and Harvard Medical School in Boston, said it's been "really positive seeing patients improve their own health literacy," including through ChatGPT Health.

"The majority of clinicians are using [ChatGPT], and if it's good enough to teach us, who are experts, I think it's good enough to teach patients," Ali said. However, he cautioned that patients should still lean on their physicians for interpreting the information they receive.

When it comes to privacy and data protection, ChatGPT Health works similarly to the standard version, but with added privacy protections. For instance, queries aren't used to train OpenAI's foundation models, the company says.

However, an OpenAI spokesperson told MedPage Today that HIPAA doesn't apply to ChatGPT Health. The spokesperson noted that the platform is built with strong security and privacy protections by default, including encryption, and that the company takes privacy seriously, especially when it comes to health information.

The spokesperson also noted that for clinical or professional healthcare settings where HIPAA applies, OpenAI offers versions of ChatGPT that organizations can configure to meet those requirements, such as OpenAI for Healthcare. Other companies have also launched HIPAA-ready AI suites, like Anthropic's Claude for Healthcare.

Bair noted that many clinicians may copy clinic notes into LLMs, which he called a "huge HIPAA violation" that many institutions haven't addressed. He added that institutions should focus on how they can create guardrails for both physicians and patients when it comes to all of these new tools.

Nonetheless, he said, LLMs have been shaping patient-doctor interactions since they launched, and the quality of these tools has improved drastically in the past few years.

Physicians used to hold a lot more power over what information patients knew, he said. While some physicians cringe at patients relying on "Dr. Google" -- or now, "Dr. ChatGPT" -- there has undoubtedly been a broader shift in who gets to know what.

https://www.medpagetoday.com/practicemanagement/informationtechnology/119659
