If you read high-profile medical journals, the high-end popular press, and magazines like Science or Nature, it is clear that the medicalization of artificial intelligence, machine learning, and big data is in full swing. Speculation abounds about what these can do for medicine. It’s time to put them to the test.
From what I can tell, artificial intelligence, machine learning, and big data are mostly jargon for one of two things. The first is about bigger and bigger computers sifting through mountains of data to detect patterns that might be obscure to even the best trained and most skilled humans. The second is about automating routine and even complex tasks that humans now do. Some of these could be “mechanical,” like adaptive robots in a hospital, and some might be “cognitive,” like making a complex diagnosis. Others might be a combination of the two, as in the almost-around-the-corner self-driving cars.
The idea of computers sorting through data and detecting patterns is of great interest for analyzing images like mammograms and colonoscopies, and for interpreting electrocardiograms. But is this really transformative or novel? An early version of image analysis and facial recognition was proposed by the polymath Francis Galton in the late 1800s. Likewise, machine reading of electrocardiograms has been occurring since at least the 1960s. There are, of course, issues with AI and machine learning, like overdiagnosis and misreads, but the narrative is that more data and better technology will eventually solve such problems.
Perhaps, though, IBM’s overselling of Watson as an artificial intelligence that could identify new approaches to cancer care is a cautionary tale: it reminds us that many things in medicine lack fixed rules and stereotypical features, and so will be hard for AI to solve.
Another hope is that AI could somehow rehumanize medicine by improving workflows and replacing the current tidal wave of screen time with face time with patients. Although that could happen, all of the data and associated analytics could also lead to an ever more oppressive version of medical Taylorism and a drive for “efficiency.”
It is possible that technology could free physicians and enhance their interactions with patients, but as the recent move to electronic health records shows, that is far from certain, and the economic imperatives of corporate medicine to see more patients, capture more charges, and generate more throughput might just as easily predominate. Regulators will also likely weigh in. And while “Alexa, please refill Mrs. Smith’s statin prescription” seems simple enough, will we — or do we want to — get to “Alexa, please schedule Mrs. Smith for everything she needs for a hip replacement”?
I think we need a Turing test for medical artificial intelligence. The original test, proposed by the British mathematician and computer scientist Alan Turing in 1950, asks whether a computer can perform complex functions in a way indistinguishable from a human being. For medicine, the test should be a problem that is currently hard to solve. Here’s one I think would be perfect: create a weight loss plan for patients with severe obesity (a body-mass index of 40 or more) that is as effective as bariatric surgery. This would be a classic non-inferiority trial, in which a new treatment is shown to be no less effective, within a prespecified margin, than one already in use.
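To make that framing concrete, here is a minimal sketch of how the primary analysis of such a trial could work, assuming percent total body weight loss at follow-up as the endpoint. The five-point margin, the arm values, and the tiny sample sizes are illustrative assumptions, not data from any study.

```python
import math
from statistics import mean, stdev

def noninferiority_test(new_arm, control_arm, margin, alpha=0.025):
    """One-sided z-test for non-inferiority on a higher-is-better endpoint.

    H0: mean(new) - mean(control) <= -margin  (new treatment is inferior)
    H1: mean(new) - mean(control) >  -margin  (new treatment is non-inferior)
    """
    diff = mean(new_arm) - mean(control_arm)
    se = math.sqrt(stdev(new_arm) ** 2 / len(new_arm)
                   + stdev(control_arm) ** 2 / len(control_arm))
    z = (diff + margin) / se
    p = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # upper-tail normal p-value
    return z, p, p < alpha

# Hypothetical percent total body weight loss at two years (illustration only).
surgery = [28.1, 31.5, 24.9, 30.2, 27.4, 33.0, 26.8, 29.5]
ai_plan = [26.0, 29.8, 23.5, 28.9, 25.1, 31.2, 24.7, 27.9]

z, p, non_inferior = noninferiority_test(ai_plan, surgery, margin=5.0)
print(f"z = {z:.2f}, one-sided p = {p:.4f}, non-inferior: {non_inferior}")
```

A real trial would prespecify the margin and power the sample size accordingly; the point is simply that both the outcome and the decision rule are easy to state.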
Obesity treatment as a test of medical AI has the advantage of an easily measured outcome — all you need is a scale — and a condition that is potentially treatable by one or more interventions. Surgery is effective for sustained weight loss, and there are good data on the most effective surgical approaches. But it isn’t the only option — some people achieve long-term weight loss without surgery. Class 3 obesity is a common condition with plenty of downstream hazards — including increased risk of developing diabetes, heart disease, cancer, and arthritis, as well as trouble with activities of daily living — so recruiting motivated participants for a randomized trial should be relatively easy.
All sorts of data are available that could be fed into “the computers” to generate individualized plans for participants. Beyond simple demographics, the plans could also synthesize genetic data, diet and exercise preferences, and information from wearables. Text messages could remind people what foods to avoid or when they need to get in more steps for the day. Shopping for food could be automated, and certain foods and portion sizes at restaurants could be made electronically off limits. Even better, customized menus could be constructed on demand. All of this could be linked to financial incentive programs.
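For a sense of how thin the technical layer of these nudges really is, here is a purely hypothetical sketch of a wearable-driven reminder rule. Every name, field, and threshold is an assumption for illustration; none of it comes from an actual product.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    daily_step_goal: int     # personalized from history and preferences
    avoid_foods: list[str]   # flagged by the individualized plan

def nudges(p: Participant, steps_so_far: int, hour_of_day: int) -> list[str]:
    """Reminder texts a hypothetical coaching service might queue up."""
    messages = []
    # Prorate the step goal by time of day so the nudge arrives while it
    # is still actionable, rather than at midnight.
    expected_by_now = p.daily_step_goal * hour_of_day / 24
    if steps_so_far < expected_by_now:
        deficit = int(expected_by_now - steps_so_far)
        messages.append(f"{p.name}: about {deficit} steps behind today's pace.")
    # Food reminders shortly before typical lunch and dinner hours.
    if p.avoid_foods and hour_of_day in (11, 17):
        messages.append("Today's plan skips: " + ", ".join(p.avoid_foods) + ".")
    return messages

print(nudges(Participant("Mrs. Smith", 8000, ["fries", "soda"]),
             steps_so_far=2500, hour_of_day=17))
```

Writing rules like this is trivial; the hard part is showing that they change behavior over years rather than weeks.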
If you really wanted to stretch the limits, cars could be programmed to make it difficult to stop at fast-food restaurants. Or some sort of “pre-eating” aversive stimulus could be applied when the algorithm detected signals or subtle behaviors associated with an increased likelihood of excessive eating — depending, of course, on ethics committee approval.
In short, it’s entirely possible to develop a truly comprehensive weight-loss plan.
The fact that genetic data, diet preferences, wearables, and text messages don’t seem to have much impact on long-term weight loss in controlled trials is only a minor detail. There are also a host of issues with implementing artificial intelligence in the real world. But let’s not get distracted.
Enthusiasts of AI, machine learning, and big data should throw caution to the winds and craft a highly effective alternative to bariatric surgery. Such a demonstration would clearly tip the scales and show the skeptics what medical AI can do.
Or put more simply: It is time for medical artificial intelligence to go big or go home.
Michael J. Joyner, M.D., is an anesthesiologist and physiologist at the Mayo Clinic. The views expressed in this article are his own.