Clinical AI adoption is moving fast. Policy, regulation, and liability frameworks are moving too, but not always in the same direction. Increasingly, health systems are looking for ways to bridge foundational guidance with practical, clinical-grade governance, which is why cross-disciplinary efforts like the Coalition for Health AI (CHAI) are gaining momentum.
Over the past year alone, organizations have faced a growing array of state laws, federal rules, professional guidance, and proposed standards governing AI. Some overlap. Some conflict. Many raise more questions than they answer. What’s becoming clear is that a “wait and see” approach is no longer realistic for health systems deploying AI in clinical workflows.
Health systems are now being asked to innovate while simultaneously managing risk, accountability, and patient safety, often without a single, unified playbook.
One of the core challenges with clinical AI governance is that oversight is emerging from multiple directions, each addressing a different part of the problem. This fragmentation is exactly the gap organizations like CHAI were formed to help bridge: translating high-level policy and ethics into something health systems can actually operationalize.
Today, different institutions are solving different slices of the AI risk equation. The FDA evaluates whether an AI model is safe and effective for its intended clinical use. Professional bodies such as the American College of Radiology (ACR) raise important concerns around bias, generalizability, and appropriate use. Privacy and data protection remain governed by HIPAA and enforced by the HHS Office for Civil Rights (OCR). Meanwhile, accreditation bodies like The Joint Commission are beginning to examine how AI fits into broader patient safety and quality frameworks.
At the same time, health systems are navigating state-level mandates that introduce new disclosure and transparency requirements, often with different timelines and expectations. For organizations operating across multiple states, this creates real operational complexity.
No single body owns the entire AI governance problem. That fragmentation is not a regulatory failure. It is the reality that health systems must now manage.
This is where the Office of the National Coordinator for Health Information Technology (ONC) comes into play.
ONC’s Health Data, Technology, and Interoperability Rule (HTI-1) represents the federal government’s most concrete step into AI governance at the health IT layer. Rather than evaluating clinical performance, which remains the FDA’s role, HTI-1 focuses on transparency, disclosure, and accountability when AI influences clinical decision-making.
HTI-1 signals a shift in the conversation. The question is no longer just whether an AI model can be used, but whether it can be understood, governed, monitored, and defended once it is deployed in production.
This is where many health systems feel the gap between regulatory intent and operational reality. ONC defines expectations for transparency and governance, but translating those expectations into day-to-day clinical and IT workflows requires shared frameworks, a common language, and repeatable practices.
CHAI plays a complementary role here. While ONC sets direction through policy and certification expectations, CHAI helps health systems operationalize those signals through applied tools such as model cards, testing and evaluation benchmarks, and governance processes that can be implemented and sustained over time.
In that sense, ONC and CHAI are not solving the same problem, but adjacent ones. ONC establishes the direction of travel. CHAI helps organizations navigate the path.
Much of today’s AI governance conversation is grounded in strong foundational frameworks.
NIST’s AI Risk Management Framework provides an essential structure for thinking about risk across the AI lifecycle. It establishes shared terminology and a systems-level approach that applies across industries. But by design, it remains voluntary and high-level.
CHAI builds on this foundation by focusing specifically on healthcare. Its Testing and Evaluation Framework moves governance from principle to practice, emphasizing not just whether a model is accurate, but whether it is usable, reliable, and appropriate in real clinical environments.
In practice, this distinction matters. A model may meet foundational requirements for security and integration, yet fail to perform as expected once introduced into a radiology department, ICU, or outpatient workflow. Governance only works when both perspectives are applied together.
Professional organizations such as the ACR have highlighted how AI models can behave differently across populations, modalities, and care settings. Even highly accurate models may surface unintended bias or performance gaps once deployed outside the environment in which they were trained.
This is why local validation, continuous monitoring, and periodic reassessment are becoming central to responsible AI governance. CHAI reinforces this by encouraging health systems to examine performance across age, gender, race, and other relevant factors, rather than relying solely on aggregate accuracy metrics.
Governance is not about slowing innovation. It is about ensuring that innovation is measurable, defensible, and aligned with patient safety.
AI governance is no longer a future concern. It is already shaping procurement decisions, deployment strategies, and executive-level conversations.
FDA oversight, ONC’s HTI-1 rule, NIST frameworks, accreditation expectations, and professional guidance are converging. Each addresses a different dimension of risk, but together they send a clear signal: health systems are expected to understand how their AI works, where it performs well, where it does not, and how those decisions are governed over time.
The challenge is not a lack of guidance. It is the volume and fragmentation of it.
The opportunity for health systems in 2026 is to move beyond reactive compliance and toward proactive, defensible AI governance.
That means treating governance as infrastructure, not a one-time exercise. It means having visibility into AI performance over time. It means creating repeatable processes for evaluation, deployment, monitoring, and reassessment. And it means aligning policy, clinical, technical, and operational perspectives around shared accountability.
Collaborative efforts like CHAI become especially valuable at this stage, helping organizations connect regulatory expectations with practical execution across diverse clinical environments.
These themes will be explored further in our upcoming webinar on February 19, hosted in partnership with CHAI.
The panel brings together perspectives from policy, clinical leadership, research, AI, and health system operations to discuss how organizations are navigating this moment and what practical AI governance looks like in real clinical settings.
If clinical AI is part of your 2026 roadmap, this is a conversation worth joining.
Register here: https://hubs.ly/Q040Z52P0