The American Medical Association is calling on federal lawmakers to enact safeguards on AI chatbots, particularly when it comes to protecting patients’ mental health.
With the rapid rise in people turning to chatbots for behavioral and other health-related issues, the AMA wrote April 22 to the co-chairs of the Congressional AI and Digital Health caucuses and the Senate AI Caucus, urging stronger regulation of the technology.
“AI-enabled tools may help expand access to mental health resources and support innovation in healthcare delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response,” AMA CEO John Whyte, MD, stated in a news release. “With thoughtful oversight and accountability, policymakers can support innovation and ensure technologies prioritize patient safety, strengthen public trust, and responsibly complement — not replace — clinical care.”
The AMA’s recommendations include requiring chatbots to clearly disclose that users are interacting with AI, prohibiting them from presenting themselves as licensed clinicians, banning them from diagnosing or treating mental health conditions without regulatory due diligence, clarifying when AI solutions qualify as medical devices, and mandating strict data protection standards.