'Human experts will make the call': Stanford launches an accelerated test of AI to help care for Covid-19 patients

The challenge is not building the algorithm; the Stanford team simply picked an off-the-shelf tool already on the market. The challenge is figuring out how to carefully integrate that tool into already-frenzied clinical operations.
“The hardest part, the most important part of this work is not the model development. But it’s the workflow design, the change management, figuring out how do you develop that system the model enables,” said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford’s Institute for Human-Centered Artificial Intelligence.
The machine learning model Li’s team is working with analyzes patients’ data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps — such as prompting a nurse to check in more frequently or order tests — that would ultimately help physicians make decisions about a Covid-19 patient’s care.
Nearly 50 health systems — which cover hundreds of hospitals — have been using the model to identify hospitalized patients with a wide range of medical conditions who are at the highest risk of deterioration, according to a spokesperson for Epic. The company recently built an update to help hospitals measure how well the model works specifically for Covid-19 patients. The spokesperson said that work showed the model performed well and didn’t need to be altered. Some hospitals are already using it with confidence, according to the spokesperson. But others, including Stanford, are now evaluating the model in their own Covid-19 patients.
In the months before the coronavirus pandemic, Li and his team had been working to validate the model on data from Stanford's general population of hospitalized patients. Now, they've switched their focus to testing it on data from the dozens of Covid-19 patients who have been hospitalized at Stanford, a cohort that, at least for now, may be too small to fully validate the model.
“We’re essentially waiting as we get more and more Covid patients to see how well this works,” Li said. He added that the model does not have to be completely accurate in order to prove useful in the way it’s being deployed: to help inform high-stakes care decisions, not to automatically trigger them.
As of Tuesday afternoon, Stanford’s main hospital was treating 19 confirmed Covid-19 patients, nine of whom were in the intensive care unit; another 22 people were under investigation for possible Covid-19, according to Stanford spokesperson Julie Greicius. The branch of Stanford’s health system serving communities east of the San Francisco Bay had five confirmed Covid-19 patients, plus one person under investigation. And Stanford’s hospital for children had one confirmed Covid-19 patient, plus seven people under investigation, Greicius said.
Stanford’s hospitalization numbers are very fluid. Many people under investigation may turn out not to be infected, and many confirmed Covid-19 patients with relatively mild symptoms may be quickly cleared to go home.
The model is meant to be used in patients who are hospitalized, but not yet in the ICU. It analyzes patients’ data — including their vital signs, lab test results, medications, and medical history — and spits out a score on a scale from 0 to 100, with a higher number signaling elevated concern that the patient’s condition is deteriorating.
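For readers who think in code, the interface the article describes can be sketched as a rough stub. The field names, the PatientSnapshot class, and the deterioration_score function below are illustrative assumptions, not Epic's or Stanford's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PatientSnapshot:
    """Hypothetical bundle of the inputs the article lists:
    vital signs, lab test results, medications, and medical history."""
    vitals: dict[str, float]       # e.g. {"heart_rate": 96.0, "spo2": 0.93}
    labs: dict[str, float]         # e.g. {"crp_mg_l": 41.0}
    medications: list[str]
    history: list[str]
    taken_at: datetime

def deterioration_score(snapshot: PatientSnapshot) -> int:
    """Stand-in for the vendor model: returns a score from 0 to 100,
    with a higher number signaling greater concern that the patient's
    condition is deteriorating."""
    raise NotImplementedError("The real model is proprietary; this stub only fixes the shape of the interface.")
```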
Already, Li and his team have started to realize that a patient’s score may be less important than how quickly and dramatically that score changes, he said.
“If a patient’s score is 70, which is pretty high, but it’s been 70 for the last 24 hours — that’s actually a less concerning situation than if a patient scores 20 and then jumps up to 80 within 10 hours,” he said.
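Expressed as code, that intuition amounts to flagging patients on the slope of the score rather than its level. This is a minimal sketch, assuming a made-up 10-hour window and 40-point jump threshold; Stanford has not published the cutoffs it will use.

```python
from datetime import datetime

def rapid_jump(scores: list[tuple[datetime, int]],
               window_hours: float = 10.0,
               jump_points: int = 40) -> bool:
    """True if the deterioration score rose by more than `jump_points`
    within any `window_hours` span (e.g. 20 to 80 in 10 hours), which
    Li suggests is more alarming than a score that holds steady at 70.
    `scores` is assumed to be sorted by timestamp."""
    for i, (t_early, s_early) in enumerate(scores):
        for t_late, s_late in scores[i + 1:]:
            within_window = (t_late - t_early).total_seconds() <= window_hours * 3600
            if within_window and (s_late - s_early) > jump_points:
                return True
    return False
```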
Li and his colleagues are adamant that they will not set a specific score threshold that would automatically trigger a transfer to the ICU or prompt a patient to be intubated. Rather, they’re trying to decide which scores or changes in scores should set off alarm bells that a clinician might need to gather more data or take a closer look at how a patient is doing.
“At the end of the day, it will still be the human experts who will make the call regarding whether or not the patient needs to go to the ICU or get intubated — except that this will now be augmented by a system that is smarter, more automated, more efficient,” Li said.
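One way to read that design choice in code: the score and its trajectory only order a worklist for clinician review, and nothing in the software escalates care on its own. The function below is a hypothetical sketch, not Stanford's system, and it reuses the illustrative thresholds from the sketch above.

```python
from datetime import datetime, timedelta

def review_priority(scores: list[tuple[datetime, int]]) -> tuple[int, int]:
    """Sort key for a clinician worklist: patients whose score jumped
    sharply in the last 10 hours come first, then higher current scores.
    The output only orders chart reviews; ICU transfer and intubation
    decisions stay with the human experts."""
    latest_time, latest_score = scores[-1]      # assumes scores sorted by time
    recent_low = min(
        (s for t, s in scores if latest_time - t <= timedelta(hours=10)),
        default=latest_score,
    )
    jumped = (latest_score - recent_low) > 40   # illustrative threshold
    return (0 if jumped else 1, -latest_score)

# Hypothetical usage: order the Covid-19 ward's patients for review.
# worklist = sorted(patients, key=lambda p: review_priority(p.scores))
```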
Using an algorithm in this way has the potential to minimize the time clinicians spend manually reviewing charts, freeing them to focus on the work that most urgently demands their direct expertise, Li said. That could be especially important if Stanford’s hospital sees a flood of Covid-19 patients in the coming weeks. Santa Clara County, where Stanford is located, had confirmed 890 cases of Covid-19 as of Monday afternoon. It’s not clear how many of those patients have needed hospitalization, though San Francisco Bay Area hospitals have so far not faced the crush of Covid-19 patients that New York City hospitals are experiencing.
That could change. And if it does, Li said, the model will have to be integrated into operations in a way that will work if Stanford has several hundred Covid-19 patients in its hospital.