
Sunday, August 6, 2023

When It Comes to Eye Care, AI Couldn't See Straight

 In response to commonly asked patient questions, an artificial intelligence (AI) chatbot gave inappropriate and even potentially harmful medical advice about vitreoretinal disease, according to a cross-sectional study.

Two ophthalmologists determined that the popular chatbot ChatGPT accurately answered only eight of 52 questions about retinal health that were submitted in late January, reported Peter Y. Zhao, MD, of New England Eye Center at Tufts Medical Center in Boston, and colleagues.

Two weeks later, after the questions were resubmitted, all 52 responses changed, with 26 changing materially: accuracy materially improved for 30.8% of the questions and materially worsened for 19.2%, they wrote in a research letter in JAMA Ophthalmology.

In recent months, stunning advances in AI have sparked high-level debate about how to best use the technology while preventing it from launching a "Westworld"-style takeover of humanity. On the medical front, teams of clinicians have tested AI chatbots by peppering them with questions about healthcare.

The chatbots have performed fairly well in recent analyses of their responses to questions about cardiac care, appropriately responding to 21 of 25 questions on the prevention of cardiovascular disease, and oncology, correctly answering 26 of 30 questions about "common oncology topics," with none of the wrong answers considered harmful. One chatbot even did a good job at making complex diagnoses.

However, the chatbot in the cardiac care study did make something up, which is known as "hallucinating" in AI circles, when it responded that the cholesterol-lowering drug inclisiran (Leqvio) is commercially unavailable. In fact, the FDA approved it in 2021 and it's readily available.

Study co-author Benjamin K. Young, MD, of Oregon Health & Science University in Portland, told MedPage Today that he and his co-authors "hypothesize[d] the inaccuracy rate was very high because retina is a small subspecialty. Therefore, it stands to reason that ChatGPT has fewer online resources to 'learn' from compared to something like heart disease."

The authors also noted that "hallucination [generating factually inaccurate responses] is a known issue with LLM [large language model]-based platforms but has the potential to cause patient harm in the domain of medical knowledge."

In this study, ChatGPT responded to a question about the treatment options for central serous chorioretinopathy, a condition often exacerbated by corticosteroid use, by advising the user to take corticosteroids.

"Steroids make the condition worse, but the chatbot said you should use steroids to make it better," Zhao told MedPage Today. "That was a complete 180, a completely wrong type of answer."

The chatbot also incorrectly included injection therapy and laser therapy as treatments for epiretinal membrane, though it correctly mentioned vitrectomy as an option.

Young pointed to another new study in which researchers asked retinal disease-related questions of a newer version of ChatGPT and found that most responses were "consistently appropriate." While that study's methodology was different from Zhao and Young's, Young said it may be a sign that ChatGPT is getting better.

Zhao and colleagues used Google's "People Also Ask" subsection to make a list of commonly asked questions about vitreoretinal conditions and procedures, including macular degeneration, diabetic retinopathy, retinal vein occlusion, retinal tear or detachment, posterior vitreous detachment, vitreous hemorrhage, epiretinal membrane, macular hole, central serous chorioretinopathy, retina laser, retinal surgery, and intravitreal injection, as well as ocular symptoms that could be explained by vitreoretinal disease using the terms "floaters," "flashes," and "visual curtain."

The questions were initially posed to ChatGPT on Jan. 31, 2023. Since ChatGPT is continually updated, the researchers resubmitted the questions on February 13.

Matthew DeCamp, MD, PhD, of the University of Colorado Anschutz Medical Campus in Aurora, told MedPage Today that studies like this have important limitations.

"This study required the entire answer to be accurate. But answers could be entirely accurate, or partly accurate, or completely inaccurate, and not all inaccuracies carry the same importance," said DeCamp, who was not involved in the research. "This study would have been stronger had the researchers also found a way to judge answers from real-life clinicians. Better yet, the researchers could have been blinded to whether an answer came from a real-life physician or a chatbot. There is a risk that the researchers' own biases could have influenced their judgment."

It may be impossible to know why chatbots are producing bad information, he noted, since some "are built on inexplicable models -- so-called 'black boxes' -- that do not or cannot cite actual sources of information."

Moving forward, he said chatbot developers "will need to know how clinicians and patients tend to ask questions, and to be attentive to that issue of differential impact -- the possibility that the chatbot answers questions differently to different people."

"The fact that answers may change over even a short time period for no clear reason is a real concern," he added.

As to whether "Dr. Google" is any better at providing accurate health information, DeCamp pointed to a new study that compared how Google and ChatGPT answered questions regarding dementia.

"Whereas Google was more current and transparent, [it] required users to sift through commercial information and advertising that could be hard to interpret," he said. "ChatGPT was more conversational in responses, which could help the user experience and hence understanding, but it did not include sources. Comparing chatbots against each other, against other online sources of information, and against humans are all going to be important."

Disclosures

Zhao reported no disclosures.

Young reported support from the NIH, the Malcom M. Marquis, MD Endowed Fund for Innovation, and Research to Prevent Blindness.

DeCamp reported NIH grant funding to his institution to examine the use of AI-based prognostic algorithms in palliative care and from the Greenwall Foundation to examine how patients experience patient-facing chatbots in health systems.

Primary Source

JAMA Ophthalmology

Source Reference: Caranfa JT, et al "Accuracy of vitreoretinal disease information from an artificial intelligence chatbot" JAMA Ophthalmol 2023; DOI: 10.1001/jamaophthalmol.2023.3314.


https://www.medpagetoday.com/ophthalmology/generalophthalmology/105765

'Conscience' Bills Let Medical Providers Opt Out of Providing a Wide Range of Care

 A new Montana law will provide sweeping legal protections to healthcare practitioners who refuse to prescribe marijuana or participate in procedures and treatments such as abortion, medically assisted death, gender-affirming care, or others that run afoul of their ethical, moral, or religious beliefs or principles.

The law, which goes into effect in October, will gut patients' ability to take legal action if they believe they didn't receive proper care due to a conscientious objection by a provider or an institution, such as a hospital.

So-called medical conscience objection laws have existed at the state and federal levels for years, with most protecting providers who refuse to perform an abortion or sterilization procedure. But the new Montana law, and others like it that have passed or been introduced in statehouses across the U.S., goes further, to the point of undermining patient care and threatening the right of people to receive lifesaving and essential care, according to critics.

"I tend to call them 'medical refusal bills,'" said Liz Reiner Platt, the director of Columbia Law School's Law, Rights, and Religion Project. "Patients are being denied the standard of care, being denied adequate medical care, because objections to certain routine medical practices are being prioritized over patient health."

This year, 21 bills instituting or expanding conscience clauses have been introduced in statehouses, and two have become law, according to the nonprofit Guttmacher Institute. Florida lawmakers passed legislation that allows providers and insurers to refuse any health service that violates ethical beliefs. Montana's law goes further, prohibiting the assignment of health workers to provide, facilitate, or refer patients for abortions unless the providers have consented in writing. South Carolina, Ohio, and Arkansas previously passed similar bills.

Supporters of the Montana law, called the Implement Medical Ethics and Diversity Act, say it fills gaps in federal law, empowering more medical professionals to practice medicine based on their conscience in circumstances beyond abortion and sterilization.

The bill applies to a wide range of practitioners, institutions, and insurers, encompassing just about any type of healthcare and anyone who could be providing it. The exception is emergency rooms, where the federal Emergency Medical Treatment and Labor Act takes precedence.

"We have technology that is pushing the limits of what is maybe ethical, and that is different in everybody's minds," said State Rep. Amy Regier (R), who sponsored the Montana bill. "Having extra protections for people to practice according to their conscience as we continue down that path of innovation is important."

Claims the bill discriminates against patients frustrate Regier, who said it's about protecting healthcare providers. "Because someone has a conscientious objection to a specific service, they should be able to practice that way," she said.

In 1973, federal regulations known as the Church Amendments were implemented after the Supreme Court's Roe v. Wade decision made abortion legal nationwide. Under the Church Amendments, any institution that receives funding from HHS may not require healthcare providers to perform abortion or sterilization procedures if doing so would violate their religious or moral principles. Additionally, providers who refuse to perform these services may not be discriminated against for their decision.

Since then, at least 45 states have enacted their own abortion conscience clauses, according to the Guttmacher Institute. Of those, only 17 mandate that patients be notified of the refusal or limit the clause's use in the case of miscarriage or emergency.

A March 2020 article in the AMA Journal of Ethics said, "Clinicians who object to providing care on the basis of 'conscience' have never been more robustly protected than today." Legal remedies for patients who receive inadequate care as a result have shrunk significantly, the article said.

But the wave of medical conscience bills introduced in statehouses since that article was published goes beyond abortion to include contraception, sterilization, gender-affirming care, and other services. Groups such as the American Civil Liberties Union (ACLU), Planned Parenthood, and the Human Rights Campaign have been vocal opponents of this trend, criticizing it as a backdoor way to restrict the rights of women, LGBTQ+ community members, and other individuals.

Still, lawmakers across the country insist the right of doctors, nurses, pharmacists, and other medical providers to practice medicine in alignment with their beliefs is being infringed.

Some healthcare practitioners would "just be done" practicing medicine if forced to perform certain procedures such as abortion, Regier said. "That, to me, is what limits patient care."

Many of the most sweeping bills are backed by organizations that have made it their business to promote this "conscience" agenda nationwide, such as the Christian Medical Association, the Catholic Medical Association, and the National Association of Pro-Life Nurses. Other groups launched a joint effort in 2020 with the explicit purpose of advancing state legislation that makes it easier for healthcare providers to refuse to perform a wide range of procedures, including abortion and types of gender-affirming care.

The organizations that started the initiative are the Religious Freedom Institute in Washington, D.C.; an Arizona-based nonprofit called the Alliance Defending Freedom; and the Christ Medicus Foundation in Michigan. According to its website, the coalition bolsters efforts to pass more sweeping medical conscience legislation, using methods including print and digital media campaign strategies, grassroots organizing, and advocacy. After successes in Arkansas, Ohio, and South Carolina in 2021 and 2022, it turned to Montana and Florida. Regier said there are a "number of different organizations" pushing this type of legislation, including the Alliance Defending Freedom.

Most of these conscience laws are part of an "arsenal" to further social conservatism, and they are often religiously motivated, said Lori Freedman, PhD, a researcher and associate professor at the Bixby Center for Global Reproductive Health at the University of California San Francisco.

Although federal law is meant to ensure people receive lifesaving care in an emergency, Freedman said, there are cases in which patients don't receive the care they should simply because they don't clear the bar of what a facility considers emergent.

While experts warn of the potential patient health consequences of these medical conscience bills, academics say that placing a provider's choice over a patient's rights is itself a threat.

"These bills do not protect religious liberty because they make it impossible for people to follow their own religious and moral values in making major decisions," Reiner Platt said.

About one in six patients in the U.S. are treated in Catholic healthcare facilities, according to Freedman. Many of those venues strictly regulate or prohibit certain procedures, such as abortion, but do not necessarily disclose that to patients. As of 2016, more than 25% of hospital beds in Montana were in such facilities, according to the ACLU. Freedman determined through her research that about one-third of people whose primary hospital was Catholic didn't know of its religious affiliation and therefore were unaware of those limitations on their care.

The problem can extend to secular medical institutions, too. According to the AMA Journal of Ethics article, there are no rules requiring a patient be informed a provider is practicing conscientious objection, which means the patient might "unknowingly receive substandard care" and "even be harmed by" the provider's refusals.

"As much as we like to think about these providers and their opinions, so much is determined at a larger, structural level," Freedman said. "Abortion has been stigmatized, marginalized, and constrained," and plenty of hospitals and physician groups have made great efforts to "make a very safe service somehow illegal to provide within their context."

https://www.medpagetoday.com/special-reports/features/105762

NYPD buys top-of-the-line tactical flying tech, explores drone usage to answer 911 calls

 Drone tech is really taking off at the NYPD.

The nation’s largest police department has spent tens of thousands of dollars on top-of-the-line tactical drones so far this year — and is exploring using the new technology to help cops answer 911 calls in the future, The Post has learned.

City records show the NYPD recently put in orders for a number of new aerial bots from Brinc Inc., a Seattle-based company that boasts plans of one day creating drones that will “quickly and effectively” respond to emergencies, even before law enforcement.

The purchase of the Lemur 2 quad-copters — devices equipped with thermal imaging and night vision and designed to serve as the first line of SWAT — is the latest investment in policing tech by Mayor Eric Adams’ administration.

The purchases — as well as a previously unreported multimillion-dollar investment by the NYPD in two social media tracking tools — are raising eyebrows among lawmakers and privacy advocates who say guardrails could be needed for the department’s new use of policing tech.

“I think those procurements are really concerning,” said Daniel Schwarz, senior privacy and tech strategist with the New York Civil Liberties Union.


New York City Mayor Eric Adams loosened restrictions on drone use in the Big Apple.
Michael Appleton/Mayoral Photography Office

“It has a chilling effect on people, and we’ve seen that these tools were specifically used to monitor and surveil protest activities,” added Schwarz, noting his concern with the lack of transparency surrounding the use of such technology.

The NYPD’s drone fleet

Adams, who recently loosened restrictions on drone use in the Big Apple, was mulling enlisting a mini-army of the aerial robots to combat the rise in crime last year, The Post reported at the time.

The NYPD pulled the trigger on some new copters this year, shelling out $87,747 on June 6 for Brinc’s Lemur 2 drones, according to city records.

Brinc CEO Blake Resnick told The Post the order was for fewer than 10 of the next-generation devices, which have also been used in Ukraine to scan bombed-out structures amid the ongoing war with Russia and in Turkey after the massive earthquake there earlier this year.

The Lemur 2 drone was released in March.
Courtesy of BRINC

The NYPD also spent $108,000 on unspecified accessories and operational costs over the past few months, according to the records. And the department submitted a $95,000 receipt for drones and vehicles on July 11, though the record lacks any other details.

The NYPD did not answer questions about the number of drones purchased, what they are equipped with, or how they will be used.

“To safeguard our modern city in a forward-looking world, it is essential to explore ways technology can support the NYPD’s mission,” a police spokesperson said in a statement Sunday.

The department in 2018 started tapping the remote-controlled devices in hostage situations, crowd monitoring and other police business — but the use was sparse under former Mayor Bill de Blasio.

Overall drone use by the NYPD has doubled over the first 15 months of the Adams administration, with cops dispatching the bots in crime-fighting or emergency situations 48 times between January 2022 and March 2023.

That’s compared to just 36 uses recorded in the last 15 months of de Blasio’s tenure.

The department also ramped up its training and testing of the copters, logging 132 flights in Adams’ first 15 months, compared to just 54 similar flights during the same time span at the end of de Blasio’s tenure. 

Brinc CEO Blake Resnick was inspired to create the drones after the mass shooting in Las Vegas.
Courtesy of BRINC

As of last June, the NYPD had 19 drones in its fleet, according to an NYCLU report.

The force also tested speakers mounted on drones on July 16 in what it said was a trial of the devices’ “remote-piloted public messaging capabilities,” but it was unclear if the Lemur 2 was used.

The top-of-the-line drones are designed to smash through windows and push open doors to make sure rooms are clear of dangers inside buildings before cops enter.

They can also create digital 3-D maps of buildings, provide real-time HD video with night vision and thermal scanners and be used by negotiators to communicate with a suspect.

Brinc touted its plans to use the copters as first responders as part of the Lemur 2 rollout, noting it intends to invest “in a safer tomorrow by developing a network of docked drone systems to respond to 911 calls quickly and effectively, moving response time from minutes or hours into seconds while providing the capabilities to enable a de-escalation-first approach to public safety.”

Advancements in tech could soon have drones helping to answer 911 calls.
Michael Appleton/Mayoral Photography Office

Resnick, 23, said his latest line is more suited for indoor use when cops are responding to hostage situations, active shooters or building collapses.

“The whole purpose of the drone is to get eyes and ears in dangerous places,” said Resnick, adding, “So if you don’t want to send a human being into a dangerous environment, you could send the drone instead.”

‘Drones as First Responders’

While more than 1,400 police departments across the country own drones, the use of the devices as first responders has been sparse.

The most well-known use has been in Chula Vista, California, where cops have flown 14,000 drone flights since 2018 to assess emergency calls and whether a human response is needed, according to the department.

But the practice, dubbed “Drones as First Responders,” appears to be gaining traction as new tech emerges — spurring a warning from the American Civil Liberties Union just last month.

“We’re very concerned that we may be moving toward a future where we find ourselves constantly scanning the skies, seeing drones overhead, and feeling like the eyes of law enforcement are always upon us,” Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, wrote in the 10-page report published on July 27.

“That’s no way for anybody to have to live.”

Cops can watch live feeds of what the drone is capturing with thermal imaging and night vision.
Courtesy of BRINC

Stanley acknowledged that the tech can serve as an invaluable resource in dangerous situations for cops, but warned of privacy issues and potential abuse as federal regulations lag compared to the advancement in the devices and their use.

It’s not clear if the NYPD has concrete plans to use drones to respond to 911 calls in the future, but Councilwoman Kamillah Hanks (D-Staten Island), chair of the public safety committee, confirmed that the department is exploring the option.

“The NYPD is currently exploring the enhanced use of drones to assist officers who respond to 911 calls, as well as to help provide notifications during public emergencies and immediate help to struggling swimmers at beaches,” Hanks told The Post.

“I will work with the Mayor’s office and the NYPD to make sure the technology is deployed responsibly.”

Tech investments under Mayor Adams

The NYPD has also invested heavily in the social media monitoring program, Dataminr — shelling out $4.3 million to the company over Adams’ 19 months in office, city records show.

That’s compared to the roughly $7 million total the NYPD paid the company in the five years prior to Adams taking office.

“The Department has used both unmanned aircraft systems and social media analysis tools for several years,” the NYPD spokesperson said in the statement Sunday.

The force was criticized in 2019 for using Dataminr to monitor activist social media accounts during Black Lives Matter protests.

NYPD Commissioner Edward Caban announcing the rollback of drone restrictions last month.
Michael Appleton/Mayoral Photography Office

The NYPD also signed a smaller deal, at just over $165,000, with ShadowDragon, a company that collects information on suspects by scanning websites.

“Bad guys also share too much online. Use that against them with publicly-available social media information,” the company’s website states.

Schwarz said one “big concern” of his was the lack of info on the companies’ accuracy in tracking and interpreting social media.

“Unfortunately, we’re completely in the dark. I have not seen any independent auditing of those systems,” he said, adding that “in an ideal world, we would have full transparency and could have meaningful discussions about whether those tools have a place here in our government, with our law enforcement departments.”

State Sen. Jessica Ramos (D-Queens) scoffed at the tech and price tag associated with its use.

“Taxpayer dollars spent on surveillance are better spent on fully funding housing plans and education, both proven to actually prevent crime,” she said.

https://nypost.com/2023/08/06/nypd-exploring-use-of-drones-to-answer-911-calls-as-it-buys-high-tech-tactical-bots/