Numerous artificial intelligence (AI)-powered systems for diagnosing and monitoring Alzheimer's disease and other dementias are now authorized by the FDA, but how these systems were developed -- and therefore how they may or may not perform in important patient subgroups -- is difficult to see from the publicly available information, researchers found.
Among 24 such systems (called devices in FDA parlance) authorized since 2015, FDA summaries for 14 lacked data on the participant sets used for training, and there was no information on validation sets for 22, according to Krista Y. Chen, MPH, of Johns Hopkins University School of Medicine in Baltimore, and colleagues.
Peer-reviewed journal articles filled in some of these gaps, the group reported here at the Alzheimer's Association International Conference (AAIC) and in a JAMA research letter published simultaneously. But these articles provided training-set information for only five systems and validation-set information for 10. Overall, Chen and colleagues could examine even the most basic characteristics of the training and validation sets for fewer than half of these clinical tools.
For 23 of the systems, no information on participants' race/ethnicity was available from either source, and a majority lacked data on parameters including age, sex, and disease status. Furthermore, "no justification was provided" for these omissions, Chen and colleagues wrote in JAMA. The one system that reported a racial breakdown for its training set (90% white) provided none for its validation set.
These issues are important because Alzheimer's and related dementias do not manifest equally in all demographic groups. "Lack of transparency raises concerns about algorithmic bias -- or underperformance, underdiagnosis, and inequitable care recommendations for underrepresented populations -- impacting care planning," the researchers explained.
This lack of disclosure makes it hard to determine whether these systems adhere to recent FDA guidance calling for "demographic representativeness" in the development of AI- and machine learning-based medical devices. It also suggests that manufacturers aren't routinely following the FDA's guidance for reporting such data, compliance with which is technically voluntary. (The agency's zeal for pushing such guidance may also have diminished amid the Trump administration's efforts to eliminate race-conscious considerations in research and policy; the most recent document was published a week before current FDA Commissioner Marty Makary, MD, MPH, took office.)
In her talk at AAIC, Chen noted that women and Black and Hispanic patients face "uneven burdens" in dementia care, including later diagnosis, less access to medications, more frequent hospital admissions and complications, and less palliative care. Consequently, "transparency of datasets is essential to understanding performance variation and appropriate application," she said.
But the study indicates that opacity rules the day for these systems. "This lack of transparency regarding datasets raises uncertainty about real-world generalizability and clinical accuracy of these devices in their intended populations," Chen said.
Disclosures
No external funding for the study was reported.
Chen declared she had no relevant financial interests.
The study's senior author reported relationships with Alosa Health and Sunday Health. One co-author reported serving as a deputy editor at JAMA (but disclaimed any role in reviewing or accepting the manuscript) and also reported a relationship with Arnold Ventures and serving as an expert witness in a lawsuit alleging that Biogen violated federal law. Another author reported a relationship with Linus Health.
Primary Source
JAMA
Source Reference: Chen KY, et al. "Demographic data supporting FDA authorization of AI devices for Alzheimer disease and related dementias." JAMA 2025; DOI: 10.1001/jama.2025.12779.