I recently wrote about an MIT Media Lab study that raised concerns about "cognitive debt," a term referring to the idea that repeated reliance on artificial intelligence (AI) for complex tasks may weaken learning, memory, and critical thinking. In the MIT study, participants who relied heavily on AI showed reduced EEG activity, a finding that fueled the concern.
That research was soon followed by another troubling finding reported in The Lancet Gastroenterology & Hepatology. In the study, seasoned gastroenterologists -- averaging nearly three decades of experience and more than 2,000 colonoscopies each -- adopted AI-assisted polyp detection. Initially, the results were reassuring. When AI was active, adenoma detection improved. But when those same physicians later performed colonoscopies without AI, their detection rates dropped sharply, from 28.4% to 22.4%. The proposed explanation was that clinicians had grown accustomed to the AI's visual cues and paid less attention once those cues disappeared.
The word used to describe this phenomenon is "de-skilling." Unsurprisingly, the result has unsettled many doctors. If AI makes endoscopists worse at their jobs once it is removed, should we be worried about its widespread deployment across medicine, where virtually every specialty is cognitively demanding?
The short answer is yes: this deserves serious attention. The longer answer is that we should be very careful about what, or whom, we blame.
A Signal Worth Studying -- But Not Panicking Over
Let's start with what this study is -- and is not.
It is a retrospective, observational study, not a randomized trial. The exposure window -- 3 months before and 3 months after AI implementation -- was short. The cohort was limited to four endoscopy centers in Poland, comprising 1,443 patients. Patient populations may not have been identical. And the proposed mechanism is speculative, not proven.
But here is what makes the study genuinely important: it is one of the first to ask what happens to clinician skills when AI is not available.
We have hundreds of studies asking whether AI improves detection, speed, or accuracy. I'm not aware of many asking what prolonged AI use does to the human operator's baseline competence.
That imbalance matters, because AI is not always available. Systems go down. Vendors fail. Hospitals switch platforms. Rural or resource-limited settings may never have access in the first place.
A physician who cannot function competently without AI is not practicing augmented medicine -- they are practicing dependent medicine.
That is a real risk.
De-skilling Didn't Start With AI
Here is where perspective matters.
Medicine has struggled with skill drift since long before large language models or computer vision entered the exam room. Procedural volume requirements exist for a reason. Board recertification exists for a reason. We already acknowledge -- often uncomfortably -- that skills wither without deliberate practice.
We did not blame stethoscopes for dulling percussion skills. We did not blame calculators for weakening mental arithmetic. We did not blame imaging for dulling physical examination skills. And we certainly did not blame books for making self-education optional. Books expanded access to knowledge; they did not excuse physicians from the responsibility to read, reflect, and think critically. If anything, they raised expectations.
AI belongs in that same category. The problem is not that AI assists detection. The problem arises when clinicians allow assistance to replace attention -- attention that is a prerequisite for learning. That is not a technological failure. It is a professional one.
Cognitive Debt: A Useful Metaphor, Not a Verdict
The MIT study on AI and "cognitive debt" sparked intense debate for similar reasons. One camp argued that skills decay when they are no longer exercised, invoking well-known principles of neuroplasticity. Others countered that the reduced EEG activity may reflect cognitive economy rather than loss -- the brain reallocating effort as AI assumes the mechanical or organizational portions of a task rather than the core reasoning itself.
The brain adapts to demand. Learning requires active engagement first; when AI performs the hardest parts of a task before that engagement occurs, the learning signal is blunted or lost. What matters, then, is not whether AI is used, but when and how it is used.
Experienced clinicians who think first and consult tools second are unlikely to lose core skills. Those who defer primary cognition to technology may lose those skills or never fully develop them.
This is not new. It is simply more visible now.
The Lancet Study's Real Lesson
The most important insight from the colonoscopy study is not that AI "made doctors worse." It is that implementation matters.
If clinicians unconsciously outsource vigilance -- waiting for a "green box" rather than actively scanning -- then AI integration has failed at the human-factors level. The answer is not to retreat from AI, but to design its use so that it reinforces attention rather than replaces it.
That might include:
- Periodic practice without AI to preserve baseline skills
- Training programs that emphasize fundamentals alongside AI use
- Interfaces that require active confirmation rather than passive acceptance
- Credentialing standards that assume AI will not always be present
These concerns apply even more acutely to physicians in training. If AI is introduced before foundational skills are established, the risk is not de-skilling but never skilling at all. Medical education therefore deserves special attention when discussing how -- and when -- AI should be used.
None of this requires abandoning AI. It requires respecting the limits of automation. If a physician becomes less competent when a tool is removed, the failure is not that the tool existed. The failure is that competence was allowed to atrophy unchecked. AI does not absolve clinicians of vigilance any more than textbooks absolved them of thinking. It simply raises the stakes for how we train, practice, and self-monitor. Tools matter. But professionalism matters more.
The Right Question Going Forward
So, does this study demonstrate a real risk, or is it an overblown concern?
It is a real risk -- if we are complacent. It is an overblown fear -- if we are deliberate.
The correct response is not to slow AI adoption reflexively, nor to dismiss early warning signs as technophobia. It is to ask better questions: How do we preserve human competence in AI-rich environments? What skills must remain explicitly human, even when automation is available? How do we train physicians to use AI as a partner rather than a crutch?
If we fail to ask those questions, de-skilling will not be an unintended side effect -- it will be an avoidable outcome.
Arthur Lazarus, MD, MBA, is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book, a novel, is Against the Tide: A Doctor's Battle for an Undocumented Patient.