
Friday, October 3, 2025

AI Eroding Cognitive Skills in Doctors: How Bad Is It?

2025 brought a strange convergence: College essays and colonoscopies both demonstrated what can happen when artificial intelligence (AI) leads the work.

First came the college data: An MIT team reported in June that when students used ChatGPT to write essays, they incurred cognitive debt: “users consistently underperformed at neural, linguistic, and behavioral levels,” causing a “likely decrease in learning skills.”

Then came the clinical echo. In a prospective study from Poland published last month in The Lancet Gastroenterology & Hepatology, gastroenterologists who’d grown accustomed to an AI-assisted colonoscopy system appeared to be about 20% worse at spotting polyps and other abnormalities when they subsequently worked on their own. Over just 6 months, the authors observed that clinicians became “less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.”

For medicine, that mix sparks some uncomfortable questions. 

What happens to a doctor’s mind when there’s always a recommendation engine sitting between thought and action? How quickly do habits of attention fade when the machine is doing the prereading, the sorting, even the first stab at a diagnosis? Is this just a temporary setback while we get used to the tools, or is it the start of a deeper shift in what doctors do?

Like a lot of things AI-related, the answers depend on who you ask.

A Coin With Many Sides

On the surface, any kind of cognitive erosion in physicians because of AI use is alarming. It suggests disengagement from the task at a fundamental level, and even automation bias: over-reliance on machine systems without even knowing you’re doing it.

Or does it? The study data “seems to run counter to what we often see,” argues Charlotte Blease, PhD, an associate professor at Uppsala University, Sweden, and author of Dr. Bot: Why Doctors Can Fail Us―and How AI Could Save Lives. “Most research shows doctors are algorithmically averse. They tend to hold their noses at AI outputs and override them, even when the AI is more accurate.”

If clinicians aren’t defaulting to blind trust, why did performance sag when the AI was removed? One possibility is that attitudes and habits change with sustained exposure. “We may start to see a shift in some domains, where doctors do begin to defer to AI,” she says. “And that might not be a bad thing. If the technology is consistently better at a narrow technical task, then leaning on it could be desirable.” The key, in her view, is the “judicious sweet-spot in critical engagement.”

And the social optics can cut the other way. A recent Johns Hopkins Carey Business School randomized experiment with 276 practicing clinicians found that physicians who mainly relied on generative AI for decisions incurred a “competence penalty” in colleagues’ eyes. They were viewed as less capable than peers who didn’t use AI, with only partial relief when AI was framed as a second opinion.

If you accept the AI, you risk status; if you override it, you risk accuracy. That’s a design and governance problem, not just an attitude problem.

‘Erosion’ Depends on the Task

Nigam Shah, MBBS, PhD, professor of medicine at Stanford University, California, and chief data scientist for Stanford Health Care, argues the starting point is wrong. 

“The question to ask is how faithfully can AI tools take work off completely from my plate,” he says. “For example, when I trained, we used to count white blood cells manually in a microscope using a cell counter. Today we use a cell sorter. We do not ask whether a clinician’s skill in doing the differential cell count manually has atrophied.”

In other words, not every task deserves equal worry about erosion. Shah recommends avoiding diagnosis as the first frontier. “There is so much low-hanging fruit of mind-numbing drudgery to fix. Why do we want to go straight to the hardest tasks, like diagnosis acumen and treatment planning, for which we spend 10 years training the physician?”

Ethan Goh, MD, executive director of Stanford ARISE (AI Research and Science Evaluation), agrees that reframing the job list changes the stakes. But he also insists that “different” doesn’t have to mean “diminished,” which “implies that a doctor’s cognitive skills will deteriorate,” he says. The route to “different” starts with explicit task mapping. 

“For example, once we start mapping out different ways in which doctors are using AI — this is an ongoing study direction at Stanford and Harvard for which ARISE won a Macy’s AI in Education grant — we can start measuring the performance of AI alone versus AI with doctors for each of these tasks,” he says.

Then you assign attention with purpose. “We can make an educated decision on which tasks to apply doctors’ limited cognitive efforts on and which tasks we necessarily have to keep training and testing medical students, residents, and doctors on, so that their cognitive skills in these areas do not deteriorate,” Goh continues. 

Goh uses aviation as an analogy, pointing out that autopilot has made flying safer, “but pilots still log manual ‘stick-and-rudder’ time in simulators, practice failure modes, and undergo regular proficiency checks.”

Where Skills Start to Slip

Goh argues it’s not that doctors’ skills simply vanish; the real challenge is figuring out which ones must be protected. “Robots and automated procedures are still quite some time away compared to knowledge work,” he says.

He does flag a near-term cognitive risk. “As AI becomes so good, or more than 99% accurate, human experts defer to the AI so much and become susceptible to automation bias and anchoring bias,” he says. That’s not a reason to stop; it’s a reason to shape how and when assistance appears.

If adoption is outpacing preparation, that’s a leadership and education gap to close, says Bertalan Meskó, PhD, director of The Medical Futurist Institute, Budapest, Hungary. “AI, and especially those developing AI, don’t care about physicians’ skills but focus on replacing data-based and repetitive tasks to reduce the burden on medical professionals,” he says. “It would be the responsibility of those designing medical curricula to make sure that while physicians learn to use a range of AI-based technology, their skills and understanding will not erode.”

The colonoscopy result underscores the urgency, and this isn’t just a framing dispute. 

“The use of stethoscopes has led to much higher levels of confidence and efficiency for medical professionals in diagnosing heart and lung conditions,” Meskó says. “The use of AI should lead to the same. What is already clear now is that simply implementing AI into medical workflows will not support that vision because AI is a much more complicated and much less intuitive technology than a stethoscope. It requires a whole new level of knowledge, skills, and a mindset to make it work in our favor.”

If you want a map of where skills wobble first, Chiara Natali, a PhD candidate at the University of Milan-Bicocca, has one. In a recent review, she and her colleagues found that AI threatens some of the most central parts of clinical practice: the hands-on skill of examining patients, the ability to communicate clearly and manage their concerns, the craft of building a differential diagnosis, and the broader judgment that ties it all together.

She also pointed to two vulnerabilities that cut across all those areas. One is uniquely tied to AI: the risk that clinicians either over-trust or reflexively dismiss algorithmic advice. The other is more collective: As teams lean on machines, they risk losing shared awareness, making it harder to spot errors or back each other up when skills start to fade.

“AI doesn’t just extend what clinicians can see (the sensory layer); it also shapes what they are inclined to decide (the decision layer),” Natali says. Tools that “rank differentials, suggest next steps, or pre-fill reports” can nudge deference and “risk eroding the meta-skill of judging when not to follow a recommendation.” 

Can we get lost skills back? “Principles of neuroplasticity suggest ‘use it or lose it’,” Natali says. “And conversely, use it deliberately to regain it.” 

Goh’s hypothesis is that it has less to do with training and education and more to do with “thoughtful design of human computer interactions,” he says. “How do we design an AI product or clinical decision support that fits into the doctor’s existing workflow? How can we introduce visual and other cues that alert a doctor when necessary?” 

He points to practical patterns — triage queues for radiology; traffic-light “safety net” alerts; AI-drafted eConsults that keep the primary care provider (PCP) in charge. “This means that the PCP is still in control, while benefiting from education and awareness about actions he could take sooner on behalf of the patient.”

The Future: Inevitabilities Both Good and Bad

The longer horizon returns to identity. Blease is unapologetically direct. “I believe that over time, some degree of deskilling is inevitable, and that’s not necessarily a bad thing,” she says. “If AI becomes consistently better at certain technical skills, insisting that doctors keep doing those tasks just to preserve expertise could actually harm patients.” 

She warns against a double standard that “holds AI to a far higher standard than we hold human doctors.” And she asks the question most institutions dodge. “We need to start thinking about a post-doctor world. We will need a variety of healthcare professionals who are AI-informed. A wide variety of new roles will emerge including in the training and the testing and ethical oversight of these tools, in curating data sets, and in working alongside these tools, to deliver care and improve patient outcomes.” 

Nearer term, she wants “clear guidance and short, practical training for clinicians,” including “the ethical dimensions of AI: bias, transparency, privacy.”

So where does that leave a practicing internist, surgeon, or gastroenterologist who will be urged (and maybe even compelled) to use more AI over the next 5 years? 

The experts don’t always see eye to eye, but their advice points in a similar direction. Pick the right jobs for the machine first. Use AI ruthlessly to strip administrative drag so scarce human attention can be spent where it matters. Measure reliance and performance; sequence assistance after initial effort; protect deliberate practice without the tool. Expect skills to be redistributed and plan for that in curricula, credentialing, and team design. Design products that keep clinicians in the loop at the right moments and keep explainability front and center. Teach clinicians how to tell patients when they trusted the machine and when they did not.

The best outcome is that AI reshapes medicine on purpose: We choose the tasks it should own, we measure when it helps or harms, and we train clinicians to stay exquisitely human while the machines do scalable pattern work. In that future, clinical judgment is less displaced than redeployed, with physicians spending fewer hours wrestling software and more time making sense of people.

https://www.medscape.com/viewarticle/ai-eroding-cognitive-skills-doctors-how-bad-it-2025a1000q2k
