The use of artificial intelligence-based outcome prediction models (OPMs) in medicine is on the rise, but a new paper has warned that widespread use could lead to unintentional patient harm.
Specifically, it suggests that OPMs – statistical models that predict a particular health outcome from a patient's characteristics, and which may be used to guide difficult treatment decisions – can be vulnerable to "harmful self-fulfilling prophecies", even when they are very effective at predicting outcomes.
At the moment, the usual practice is to keep monitoring an OPM's discrimination – its ability to separate patients who go on to have the outcome from those who do not – to make sure it continues to perform well. Somewhat counter-intuitively, the new analysis in the data-science journal Patterns suggests this may not be the best approach, as it is too simplistic.
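To make the monitoring step concrete: discrimination is typically summarised as the area under the ROC curve (AUROC), the probability that the model assigns a higher risk score to a randomly chosen patient who had the outcome than to one who did not. A minimal sketch in Python, with invented numbers purely for illustration:

```python
# Pairwise AUROC: the fraction of (event, non-event) patient pairs that the
# model ranks correctly, with ties counted as half. Data below are invented.

def auroc(scores, outcomes):
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks and observed outcomes (1 = poor outcome).
predicted_risk = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
observed       = [1,   1,   0,   1,   0,   0]

print(f"AUROC = {auroc(predicted_risk, observed):.2f}")  # -> 0.89
```

A value near 1.0 is read as "good discrimination"; the paper's point is that this number says nothing about whether acting on the predictions helped or harmed the patients.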
For example, if an OPM is trained on data that reflect existing disparities in treatment or demographics, the model could perpetuate those inequalities, leading to poorer patient outcomes. To guard against that, an element of "human reasoning" must be incorporated into the process.
The team – led by Wouter AC van Amsterdam of University Medical Center Utrecht – used various mathematical models to test their hypothesis and found that, in some cases, the OPM "can lead to harm, even when the predictions exhibit good discrimination after deployment."
The study is timely, according to Professor Ewen Harrison of the University of Edinburgh, a specialist in medical informatics who was not involved in the work, because it highlights how AI and computer algorithms can unintentionally harm patients by influencing treatment decisions.
As an example, he described an AI tool that estimates who is likely to recover poorly after knee replacement surgery, using characteristics such as age, body weight, existing health problems, and physical fitness.
"Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy. However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes," said Prof Harrison.
"Patients labelled by the algorithm as having a 'poor predicted recovery' receive less attention, fewer physiotherapy sessions, and less encouragement overall. As a result, these patients indeed experience slower recovery, higher pain, and reduced mobility, seemingly confirming the accuracy of the prediction tool."
The authors of the paper suggest that the current approach to prediction model development, deployment, and monitoring "needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcomes."
https://pharmaphorum.com/news/why-relying-ai-outcome-models-may-not-be-good-idea