Agentic AI is arriving in health systems before anyone has agreed on how to contain it — whether to deploy narrow, task-specific agents or broader autonomous use cases, and how to build meaningful oversight into both.
Health system IT leaders told Becker’s they’re experimenting with early frameworks and weighing what needs to happen before they give AI agents — which plan, act and execute without a human initiating the interaction — more autonomy than they do today.
“AI does not independently diagnose, order, or treat — it can strongly recommend and prebuild orders, but a clinician must click ‘accept,’” said Richard Zane, MD, chief medical and innovation officer of Aurora, Colo.-based UCHealth. “The limit is driven by regulation, liability, and culture, not by what the models can already do.”
UCHealth is mobilizing agentic AI across administrative documentation, flow, revenue cycle and scheduling, but with strong guardrails in place: full audit logs, real-time performance dashboards, strict permissions, no access to ordering — and an immediate kill switch.
“Agents can essentially start in a shadow-like mode, then ‘earn’ autonomy only when proven accurate,” Dr. Zane said. “Clinically, AI can listen, summarize, suggest and nudge. Humans still sign every order and note … for now.”
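Stripped to its essentials, the pattern Dr. Zane describes (shadow mode, autonomy that must be earned against an accuracy benchmark, a full audit trail, and an immediate kill switch) can be sketched in a few lines of Python. Every name, field, and threshold below is hypothetical and purely illustrative; it is not UCHealth's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ShadowModeAgent:
    """Illustrative sketch of the 'earn autonomy' guardrail pattern:
    the agent drafts actions, a human approves each one, and the agent
    may act alone only after a sustained accuracy record."""
    accuracy_threshold: float = 0.98   # hypothetical benchmark to exceed
    min_reviews: int = 500             # human reviews required before autonomy
    kill_switch: bool = False          # flipping this halts all actions
    approvals: int = 0
    reviews: int = 0
    audit_log: list = field(default_factory=list)

    @property
    def autonomous(self) -> bool:
        # Autonomy is earned, never assumed, and revoked by the kill switch.
        return (not self.kill_switch
                and self.reviews >= self.min_reviews
                and self.approvals / self.reviews >= self.accuracy_threshold)

    def propose(self, action: str, human_approves: bool = True) -> str:
        if self.kill_switch:
            return "halted"
        if self.autonomous:
            # Earned autonomy: execute, but still log for periodic audit.
            self.audit_log.append(("auto", action))
            return "executed"
        # Shadow mode: record the human verdict; never execute alone.
        self.reviews += 1
        self.approvals += int(human_approves)
        self.audit_log.append(("shadow", action, human_approves))
        return "executed" if human_approves else "rejected"
```

Under this sketch, an agent that has not yet cleared the review quota stays in shadow mode, the human's click remains the last word on every action, and setting `kill_switch = True` stops execution immediately regardless of the accuracy record.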
Yale New Haven (Conn.) Health has an AI agent that answers, classifies and closes IT service desk tickets, and is exploring the technology for the call center, employee-facing document archives such as human resources policies, and knowledge base article generation. The organization also plans to launch an agent factory built on a platform such as Microsoft Copilot Studio.
“We require human-in-the-loop for all clinical decision and diagnostic support tools,” said Lee Schwamm, MD, senior vice president and chief digital health officer of Yale New Haven Health. “In operations and administration processes, we require extensive human-in-the-loop validation, but if the models perform as expected and exceed benchmark thresholds, then we will allow them to execute autonomously with periodic audits.”
He said cybersecurity concerns surrounding AI agents need to be addressed, like the potential to violate role-based access controls.
“The greatest opportunity to improve health will be found in patient-facing agents that can actually deliver healthcare,” Dr. Schwamm said. “But the immature liability frameworks and risk of patient harm from unanticipated actions are major barriers to that next evolution.”
Health systems that have scaled agentic AI have similarly done so largely in operations and revenue cycle. Leaders are still setting boundaries on the clinical side.
“The moment you’re touching anything that influences a care decision, the physician owns that outcome,” said Kathy Azeez-Narain, chief digital and customer innovation officer of Newport Beach, Calif.-based Hoag Health System. “I arrived at that line not from regulation alone but from a deep respect for the humanity required in patient care and for what it actually means to be accountable to a patient.”
The two-hospital network hasn’t deployed agentic AI yet, and Ms. Azeez-Narain doesn’t believe healthcare is ready for the technology.
“I’d be skeptical of health system leaders who claim full agentic deployment today, because the oversight infrastructure required to do that responsibly doesn’t broadly exist yet,” she said. “We’re focused on building the foundation — including the audit trails, the confidence thresholds, the legal and compliance frameworks built into design — so if and when agentic capabilities mature and regulatory clarity catches up, we can move with credibility.”
Internal-facing agents to help staffers complete partial or whole administrative tasks are the “most palatable” starting point for agentic AI in healthcare, said Omkar Kulkarni, vice president and chief innovation and transformation officer of Children’s Hospital Los Angeles.
“Broadly, what will most rapidly enable our industry to safely adopt and utilize agentic AI is collaboration,” he said. “When health systems can share best practices around agentic adoption — and, where appropriate, performance and accuracy data — we collectively strengthen the conditions for safe, efficient, and ethical deployment.”
Mr. Kulkarni pointed to KidsX, an innovation consortium of over 25 pediatric health systems that he leads, as one venue where those lessons can be shared.
Cincinnati-based Christ Hospital Health Network is taking a “crawl-walk-run” approach to agentic AI, said Joy Oh, chief information and digital transformation officer.
She said successful AI deployment requires a “self-sustaining audit and governance framework, an ROI-based prioritization methodology, and an AI-trained, ready workforce” — but health systems have historically underinvested in these areas amid competing priorities for limited resources.
“Until these components are in place, I would be hesitant to implement any agentic AI functionality, especially in our highly regulated, patient-centric environment,” Ms. Oh said.
https://www.beckershospitalreview.com/healthcare-information-technology/ai/kill-switches-guardrails-the-raging-debate-over-healthcare-ai-agents/