
Tuesday, January 20, 2026

AI Overuse Undermines Young Doctors’ Critical Thinking

Generative artificial intelligence (AI) offers benefits in medicine but poses risks to medical trainees, including loss of skills and the outsourcing of reasoning, potentially undermining their clinical competence and patient safety.

Fares Alahdab, MD, is an associate professor of biomedical informatics, biostatistics, epidemiology, and cardiology at the University of Missouri School of Medicine in Columbia, Missouri, and one of the authors of a BMJ Evidence-Based Medicine editorial examining the potential risks of using generative AI in medical education.

Speaking with Medscape’s Portuguese edition, Alahdab said, “Most of the early literature and enthusiasm surrounding generative AI in medicine has emphasized its advantages, while drawbacks have largely been treated as a secondary issue or framed as a generic caution.”

Alahdab, together with his colleagues from the University of Missouri School of Medicine, felt it was necessary to clearly define and structure these risks for learners so that educators can develop concrete mitigation strategies rather than relying on vague warnings such as “use with caution.”

Risk Categories

This study identified six risk categories:

  • Automation bias
  • Outsourcing of reasoning
  • Loss of skills
  • Racial and demographic biases
  • Hallucinations, defined as false information presented with confidence
  • Data privacy and security concerns

Among these risks to medical students, loss of skills is the most concerning. Unlike experienced physicians, who have developed mental models, pattern recognition, and reasoning habits over years of practice, students are still in the process of building these competencies.

“When they outsource information retrieval and synthesis to AI, they skip the very effort that generates lasting learning and expertise,” said Alahdab.

Experienced clinicians can often recognize when an AI suggestion is incorrect. In contrast, students have not yet internalized the reference points needed to detect subtle but potentially dangerous errors.

Another risk highlighted by the study is the outsourcing of reasoning, a process that tends to occur gradually and almost imperceptibly. AI models produce fluent, polished responses that can lead users to abandon independent information seeking, critical appraisal, and knowledge synthesis. Over time, this results in the deterioration of skills that should be continuously reinforced.

Alahdab identified specific warning signs. “A red flag is when a student can no longer explain a concept, a differential diagnosis, or a treatment plan in their own words without first checking what the AI thinks,” he said.

Other indicators of technological dependence include rarely consulting primary sources, avoiding solving exercises or drafting texts independently, and performing poorly on oral examinations without access to AI tools. “Incorporating regular periods of study and self-assessment without AI is a simple way for students to monitor whether their own reasoning remains intact,” he advised.

Second Opinion Only

Students should first attempt to complete tasks independently and then turn to AI to compare, analyze, and refine their work as needed. “Learners need to verify important clinical or scientific claims in reliable primary sources so that their own knowledge and judgment continue to grow rather than silently atrophy,” Alahdab said.

Automation bias occurs when students begin to accept incorrect recommendations generated by AI systems after prolonged use and excessive trust in the technology. The article argues that generic warnings against this behavior are ineffective because they fail to address its root cause: overconfidence in AI.

As a solution, the authors proposed the creation of confidence-calibration laboratories. In these settings, students can practice rejecting problematic AI-generated responses. Learners are exposed to a mix of correct and intentionally flawed answers and are required to accept, modify, or reject each response, justifying their decisions using primary sources.

Alahdab noted that discussions and pilot activities along these lines are beginning to emerge but have not yet been fully implemented in training programs. “I am a co-chair of one of the groups at our medical school tasked with redesigning the curriculum, and we are thinking seriously about practical steps in this direction,” he said.

Rethinking Assessment

The study also proposes a paradigm shift in how students are assessed. Rather than focusing solely on the final output, educators should ask learners to demonstrate their reasoning process, including the history of their interactions with the AI, their justifications for accepting or rejecting its suggestions, and the steps they took to verify claims against primary sources.

Educators working within competency-based assessment frameworks may be particularly receptive to this approach because it values reasoning, information seeking, and justification rather than just the final answer. The challenge acknowledged by Alahdab lies in logistics. Scalable methods are needed to evaluate prompts, justifications, and verification steps without overburdening faculty or students.

The research cited in the article shows widespread use of AI among students, while institutional policies remain inadequate in most cases. According to the authors, an ideal policy should clearly define acceptable and unacceptable uses of AI for each type of activity, including studying, academic assignments, and clinical documentation. It should also prohibit entering protected health information into commercial tools for data analysis.

In addition, such policies should require the transparent disclosure of AI use in academic work, align AI use with explicit learning outcomes such as evidence appraisal and bias awareness, mandate faculty training, and include a plan for periodic review as technology and evidence evolve.

The study also addresses biases embedded in models' training data. Research has shown that AI systems reproduce racial and demographic biases, which must be considered in both education and assessment. Another concern is the ability of AI models to generate credible false information. The authors also warned about privacy and data security risks, particularly in healthcare settings.

https://www.medscape.com/viewarticle/ai-overuse-undermines-young-doctors-critical-thinking-2026a10001ua
