
Monday, January 26, 2026

Physician-Produced Videos on Internet Flunk Evidence Test

 

  • Only about 20% of 309 physician-produced online informational videos about cancer or diabetes had high-quality supporting evidence for claims made in the videos.
  • Two-thirds of analyzed videos relied on low-quality evidence.
  • Content with weaker scientific backing had more views than videos supported by high-quality medical evidence.

Fewer than 20% of online health information videos produced by health professionals had high-quality evidence to support claims made in the videos, according to a review of content on the popular YouTube video platform.

Two-thirds of the videos, all related to cancer or diabetes, had low, very low, or no evidence to support health claims. About 15% of the 309 videos had moderate-quality evidence. A multivariate analysis showed that videos with lower-quality evidence attracted more views than those with the highest level of evidence.

The findings are consistent with "emerging concerns about medical information disseminated by licensed medical specialists," concluded EunKyo Kang, MD, of the National Cancer Center in Goyang-si, South Korea, and colleagues in JAMA Network Open.

"This reveals a substantial credibility-evidence gap in medical content videos, where physician authority frequently legitimizes claims lacking robust empirical support," the authors stated. "Our findings underscore the necessity for evidence-based content-creation guidelines, enhanced science communication training for healthcare professionals, and algorithmic reforms prioritizing scientific rigor alongside engagement metrics."

"The proliferation of physician-generated content lacking evidence standards threatens both individual patient care and broader public health outcomes, necessitating intervention from medical education institutions, professional organizations, and regulatory bodies," they added.

Efforts to address the evidence gap identified by Kang and colleagues should not be limited to social media, wrote Richard S. Saver, JD, of the University of North Carolina School of Law in Chapel Hill, in an accompanying invited commentary.

"Physician-spread information is a long-standing problem, dating back well before the internet era," he wrote. "Moreover, the current infodemic has seen notorious instances of physicians spreading misinformation in other contexts, for example, when speaking at community meetings. In short, a more comprehensive understanding of medical professionalism is needed for all public commentary."

Beyond the lack of supporting evidence, Saver wrote, the study highlighted "the problem of physician engagement with evidence-based medicine (EBM) generally, not just on social media."

"Despite the long-standing enthusiasm for EBM as the gold standard of clinical practice, many physicians have displayed reluctance to embrace EBM's preference for using hard data over practitioner intuition and isolated clinical experiences," Saver continued. "Among other reasons, EBM seemingly devalues the individual clinician's judgment, physicians may experience inertia and face financial incentives to stick with existing practices, and some evidence considered more rigorous has its own limitations."

Though alarming, healthcare professionals' frequent reliance on inferior evidence in social media videos may reflect the "difficult evidence gap in clinical practice more generally," he said.

"Certainly, more research in this area is needed," Saver concluded. "But this study ... highlights the need, and offers a useful framework, for looking at the underlying evidence when assessing health professionals' social media claims."

Calling the issue "unexplored," Kang and colleagues examined the quality of evidence supporting health claims in online videos produced by medical professionals. The lack of such information, they wrote, "creates a credibility-evidence gap that threatens the principles of evidence-based medicine."

For the study, they evaluated 309 videos identified during June 20-21, 2025, limiting the search to videos related to cancer or diabetes that were produced by healthcare professionals and had a minimum of 10,000 views.

To evaluate the quality of supporting evidence for health claims in the videos, the investigators used a new tool called E-GRADE, based on the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework. E-GRADE categorizes evidence into four levels, from Grade A (high certainty from systematic reviews and/or guidelines) to Grade D (very low or no certainty from anecdotal evidence).
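
To make the scheme concrete, the four levels can be sketched as a simple classifier. This is a hypothetical illustration, not the E-GRADE instrument itself: the article specifies only the Grade A and Grade D anchors, so the Grade B and Grade C descriptions and the source-to-grade mapping below are assumptions.

    from enum import Enum

    class EGrade(Enum):
        """Four evidence levels per the GRADE-derived scheme. Only the
        A and D anchors come from the article; the B and C descriptions
        are illustrative assumptions."""
        A = "high certainty (systematic reviews and/or guidelines)"
        B = "moderate certainty (assumed: e.g., randomized trials)"
        C = "low certainty (assumed: e.g., observational studies)"
        D = "very low or no certainty (anecdotal evidence)"

    # Hypothetical mapping from a claim's strongest cited source type
    # to an evidence grade; the study's actual rubric may differ.
    SOURCE_TO_GRADE = {
        "systematic_review": EGrade.A,
        "clinical_guideline": EGrade.A,
        "randomized_trial": EGrade.B,
        "observational_study": EGrade.C,
        "anecdote": EGrade.D,
    }

    def grade_claim(best_source: str) -> EGrade:
        """Grade a health claim by its strongest cited source,
        defaulting to Grade D when nothing recognizable is cited."""
        return SOURCE_TO_GRADE.get(best_source, EGrade.D)

    print(grade_claim("systematic_review").value)  # high certainty ...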

The 309 videos had a median view count of 164,454, and physicians produced 233 (75%) of the videos. Median video length was 19.0 minutes, and median time since upload to YouTube was 8.1 months. Application of the E-GRADE criteria resulted in the following quality-of-evidence findings:

  • Grade A (high-quality evidence): 19.7%
  • Grade B (moderate): 14.6%
  • Grade C (low): 3.2%
  • Grade D (very low/no evidence): 62.5%

By multivariate analysis adjusting for covariates, claims with lower-quality evidence had higher view counts than grade A claims, and the difference reached statistical significance for grade D versus grade A (incidence rate ratio [IRR] 1.35, 95% CI 1.00-1.81, P=0.047). Videos supported by grade B and grade C evidence also attracted more views than grade A videos, but those differences did not reach statistical significance (IRR 1.41, 95% CI 0.95-2.09, P=0.09, and IRR 1.90, 95% CI 0.96-3.76, P=0.07, respectively).
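
For readers unfamiliar with this type of analysis, IRRs for view counts are commonly obtained from a count regression in which exponentiated coefficients give rate ratios against a reference level. Below is a minimal sketch in Python with statsmodels, run on simulated data; the covariates, variable names, and choice of a negative binomial model are illustrative assumptions, not the authors' actual analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated stand-in data; the real covariates and scales may differ.
    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "grade": rng.choice(list("ABCD"), size=n, p=[0.20, 0.15, 0.03, 0.62]),
        "length_min": rng.uniform(5, 40, size=n),
        "months_online": rng.uniform(1, 24, size=n),
    })
    # Overdispersed view counts on roughly the reported median scale.
    df["views"] = rng.negative_binomial(1, 1 / (1 + 1.6e5), size=n)

    # Negative binomial regression of views on evidence grade plus
    # covariates; Treatment('A') sets grade A as the reference, so
    # exp(coefficient) is the incidence rate ratio versus grade A.
    fit = smf.glm(
        "views ~ C(grade, Treatment('A')) + length_min + months_online",
        data=df,
        family=sm.families.NegativeBinomial(),
    ).fit()

    irrs = np.exp(fit.params)      # IRR point estimates
    cis = np.exp(fit.conf_int())   # 95% confidence intervals
    print(pd.concat([irrs, cis], axis=1).round(2))

A negative binomial family is a common choice for view counts because they are heavily overdispersed; a plain Poisson model would understate the uncertainty around the IRRs.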

