
Wednesday, January 7, 2026

Work Clinically and Ethically With Chatbots and AI

 Hi. I’m Art Caplan. I’m at the Division of Medical Ethics at NYU Grossman School of Medicine in New York City. 

I’m getting an interesting question from many doctors across different specialties, and also from primary care physicians: How do I work clinically and ethically with chatbots and artificial intelligence? They’re not asking about making appointments or handling data behind the scenes. They want to know, in dealing with patients, how do I do this and do this right? 

Are people doing it? At least one survey I found reported that 70% of physician respondents were using chatbots to help them in their clinical decision-making.

Let me say first that I’m not sure that the chatbot world is ready for use clinically. At best, what I think it can be right now is a supplement, almost like a curbside consult to get another opinion. 

The databases the chatbots have available to them about many health issues are not accurate. The best ones are usually firewalled, and some are proprietary. The chatbots pick up all kinds of things floating around on the internet, vetted or not, true or not, and gurgle them back when asked a question. It’s not that you can’t look, as an ethical physician, but you have to be careful. 

Remember, too, just like with a curbside consult where you’re getting informal advice from another doctor, you are liable. You are responsible, ultimately, for the diagnosis and recommendation of treatment, not the company that made the chatbot.

That’s the overall picture. I would say it’s generally useful as a supplement or an adjuvant, but not as a substitute. It shouldn’t be used that way. 

There are a couple of ethical rules. First, get informed consent if you’re going to use it. Be transparent with your patients about how it’s going to be used.

Second, make sure that the information is private. Many of these AI chatbot companies don’t have privacy protections. Their tools weren’t intended for clinical use when they were built, so you have to be sure that any personal information you share with the chatbot is kept private.

Overall, I think it’s still early days for, if you will, AI MD. Yes, there’s information that helps. It’s great for reminders. It’s great for bringing up things that you might have overlooked or forgotten in making a diagnosis. 

No, it can’t be the final word to rely on, because that’s still a doctor’s job and that’s your responsibility. If we’re going to move forward with this world, which I think we are, then consent and privacy are absolutely the ethical norms that have to be in place.

https://www.medscape.com/viewarticle/how-work-clinically-and-ethically-chatbots-and-ai-2025a1000zv0
