If AI is involved in a medical error, who should be held accountable: the clinician or developer? Surgeons, lawyers, and forensic experts have addressed this question, which is expected to become more pressing in the coming years.
Experts converged on one point: chatbots and other decision support tools will inevitably be linked to adverse events. The central question is whether negligence has occurred and, if so, who bears responsibility. AI is reshaping medicine in diagnosis, treatment, patient care, administration, and medical liability, yet no clear legal precedent has been established, which makes the question complex and potentially daunting.
Legal Balance
Before turning to errors, experts emphasized the potential benefits of AI, including its effect on medical liability.
Speaking with Medscape’s French edition, Thomas Grégory, MD, PhD, head of department at Hôpital Avicenne, AP-HP, in Bobigny, France, and professor of orthopedic surgery and trauma at Université Sorbonne Paris Nord, said, “A tool such as Gleamer, which reviewed x-rays and was used across emergency departments within Assistance Publique - Hôpitaux de Paris, had a real clinical impact as well as a legal one, because complaints against the institution declined.” Grégory has also led digital health and surgical innovation work linked to La Maison des Sciences Numériques.
Although AI may improve safety and reduce litigation, it also raises important questions regarding when errors occur. Legal experts have expressed a consistent view of such situations.
“As long as AI is considered an instrument, that is, an object, the physician remains responsible,” said Xavier Labbée, lawyer and professor emeritus at the University of Lille in Lille, France.
“The physician remains responsible for the decision,” said Cécile Manaouil, MD, PhD, forensic expert and head of the medicolegal unit at Centre Hospitalier Universitaire d’Amiens in Amiens, France. “Physicians take ownership of the generated results, make decisions, and remain the legal authors of their clinical rationale. If AI is used, its use must be justified.”
Theory and Practice Gaps
To reduce exposure to legal action, AI should be used in accordance with best practices, including verifying outputs whenever possible and relying solely on validated tools. “When we develop an AI assisted surgical tool, we go through a supervised learning phase, during which we closely monitor data quality,” said Grégory. He added that this approach also helps limit legal risks. “The tool must then pass the rigorous CE marking process, which confirms compliance with EU safety and performance standards and requires clinical studies to demonstrate its relevance.”
He emphasized that, just as biologists use automated systems to streamline procedures, “the ultimate responsibility rests with the physician.”
In practice, assigning responsibility to physicians is challenging. Beyond standard precautions, such as informing patients about the use of an AI tool and protecting personal data, some classic recommendations related to the use of AI are difficult to implement. The final verification of AI outputs is one example. “AI remains a tool whose results should be verifiable, but if it is used to review the literature on a specific topic and every article then has to be checked, the time saved is lost,” said Manaouil.
Another issue is the development of fully autonomous systems that can operate without direct physician involvement. “For now, a clinician still oversees the process, but combining AI-guided surgery with robotic systems, as is already being explored in some research settings, would change the landscape,” said Grégory. He added that such systems are feasible and could soon enter clinical practice. In this context, clearly distinguishing physician responsibility from manufacturer liability is essential, as is the case with other automated medical technologies.
In the event of a lawsuit, questions of liability remain complex and continue to evolve. “A physician accused of wrongdoing could bring a claim against the system’s manufacturer to establish liability,” said Labbée. “Liability for defective products exists, but proving that a product is defective is difficult in court.”
Legal frameworks may also change, and what applies today may not apply in the future. “The question then becomes who is sued,” he added. “For now, proposals to grant legal personhood to AI have been set aside, but the situation could evolve over the next 10 or 20 years.”
A key message is that clinicians must adapt to shifting expectations. “Today, the focus is on the responsibility of the physician who uses AI, but in the future, attention may shift to the physician who does not use it,” said Manaouil.
https://www.medscape.com/viewarticle/ai-and-medical-errors-who-takes-blame-now-2026a1000bld