Pierre E. Heudel¹,† (Pierre-etienne.heudel@lyon.unicancer.fr) ∙ H. Crochet²,† ∙ Q. Filori²,† ∙ T. Bachelot¹,† ∙ J.Y. Blay¹,³,†
Highlights
• AI can erode physicians’ expertise; performance may drop when AI is removed.
• Automation bias drives errors: clinicians may accept wrong AI cues or flip decisions.
• Workflow shifts (e.g. fewer cases, AI training) can reduce hands-on learning.
• Deskilling affects technical and cognitive skills, plus ethics and interpretation.
• Mitigate with AI literacy, hybrid training, competence monitoring, and safeguards.
Abstract
Background
Artificial intelligence (AI) systems are increasingly deployed in clinical practice, particularly in radiology, pathology, endoscopy, and decision support. While these tools improve efficiency and accuracy, concerns have arisen about deskilling—the erosion of physicians’ expertise due to reliance on automation.
Materials and methods
We conducted a narrative review of empirical studies, randomized trials, and theoretical analyses published up to August 2025, searching PubMed, Embase, and gray literature. The focus was on quantitative evidence of decreased performance following AI exposure, automation bias, and structural changes in training environments.
Results
Evidence of clinical deskilling, though scarce, is consistent across specialties. In a multicenter randomized trial in colonoscopy, the adenoma detection rate (ADR) dropped significantly from 28.4% to 22.4% when endoscopists reverted to non-AI procedures after repeated AI use, while ADR remained stable with AI assistance (25.3%). In radiology, a controlled study of 27 breast imaging radiologists showed that erroneous AI prompts increased false-positive recalls by up to 12%, even among experienced readers. In computational pathology, experimental web-based tasks revealed that over 30% of participants reversed correct initial diagnoses when exposed to incorrect AI suggestions under time constraints. Structural deskilling has been reported in cytology following the UK’s transition to human papillomavirus primary screening, leading to an 80%-85% reduction in case volumes and consolidation of laboratories from 45 to 8 centers, with major implications for training capacity. Across domains, analyses confirm the presence of automation bias and highlight risks of diminished independent diagnostic reasoning.
Conclusions
Although limited in number, empirical studies consistently demonstrate that AI can inadvertently impair physicians’ performance or reduce opportunities for skill maintenance. Quantitative evidence of decreased diagnostic accuracy, error propagation, and training erosion underscores the need for longitudinal monitoring, adaptive curricula, and regulatory frameworks to mitigate deskilling. Safeguarding clinical expertise should be considered a central component of AI safety and resilience in medicine.