By Elizabeth McElhiney, Director of Government Affairs and Policy at Verisma
April 30, 2026
If you work in healthcare, you’ve heard the word “AI” more times in the past two years than in the previous decade combined. But behind the buzz, something substantive is happening. Federal regulators are rewriting the rules for health IT, and major healthcare organizations are issuing guidance to help institutions navigate the shift. For health information (HI) professionals, now’s the time to cut through the noise and understand what’s changing – and what it means for your organization.
Two developments deserve your full attention: the proposed HTI-5 rule from the Office of the National Coordinator for Health IT (ONC), and recent guidance from the American Hospital Association (AHA) on responsible AI adoption. Together, they paint a clear picture of where health IT is heading – and what leaders need to do to stay ahead.
Not All AI Is the Same: A Plain-Language Guide
Before diving into regulation, it helps to understand what we’re talking about when we say “AI.” The term gets applied to everything from simple automated workflows to genuinely sophisticated machine learning systems. Here’s a practical breakdown:
- Machine Learning (ML): Algorithms that learn from large datasets to identify patterns and make predictions. In health IT, ML powers case prioritization in clinical documentation integrity (CDI), predictive analytics in revenue cycle management, and security anomaly detection. Example: a system that flags high-risk records for review based on historical billing patterns.
- Natural Language Processing (NLP): A subset of ML that enables computers to read and interpret human language. In practice, NLP drives coding assistance tools that parse clinical notes and suggest diagnosis codes – reducing manual effort and error.
- Robotic Process Automation (RPA): Often mistaken for AI, RPA is rule-based automation – software that follows fixed instructions to complete repetitive tasks like data entry or form routing. It doesn’t learn or adapt, but it does save significant staff time.
- Generative AI: The newest category to enter mainstream healthcare. These models can draft clinical summaries, respond to patient inquiries, or generate documentation. They carry significant promise, and substantial governance responsibility.
Understanding these distinctions matters because the regulatory and ethical obligations differ by type. A rules-based RPA tool carries very different risks than a generative AI model that influences a clinical recommendation – the short sketch below contrasts a fixed rule with a learned prediction to make that gap concrete.
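As a minimal, purely illustrative Python sketch – the function names, fields, model, and 0.8 threshold are hypothetical, not drawn from any specific product or from HTI-5 – consider the difference between a fixed rule and a learned prediction:

```python
# Illustrative only: contrasts a fixed, RPA-style rule with an ML-style prediction.
# The function names, fields, and the 0.8 threshold are hypothetical examples.

def route_release_form(request: dict) -> str:
    """RPA-style rule: a fixed instruction that behaves the same way every time."""
    if request["form_type"] == "release_of_information":
        return "roi_queue"
    return "general_queue"

def flag_high_risk_record(features: list[float], model) -> bool:
    """ML-style check: the output reflects patterns learned from historical data,
    so it can drift or encode bias and needs ongoing monitoring and audit."""
    risk_score = model.predict_proba([features])[0][1]  # e.g., a trained scikit-learn classifier
    return risk_score > 0.8  # review threshold chosen purely for illustration
```

The first function can be reviewed once and trusted to keep doing the same thing; the second has to be validated, monitored, and re-validated as the data beneath it changes.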
HTI-5: Less Prescription, More Responsibility
The proposed HTI-5 rule represents one of the most significant updates to the ONC Health IT Certification Program in years. Its intent is largely deregulatory: reduce certification burden, modernize interoperability standards, and streamline the path for health IT developers to bring innovative tools to market.
For healthcare organizations, that sounds like welcome news – and in many ways it is. But here’s the catch: fewer federal certification requirements don’t mean less risk. When the regulatory floor lowers, responsibility shifts to organizations themselves. Under HTI-5, your team will need to independently evaluate AI tools for safety and potential bias, ensure algorithms are explainable and auditable, and maintain strong internal governance over any AI or automated system in use.
Information blocking compliance is particularly urgent. AI systems can inadvertently create violations – for instance, if an automated workflow restricts access to electronic health information (EHI) in unintended ways. Enforcement is active and penalties are real. Leaders must ask vendors tough questions: Does your AI learn from our patient data? How is access to EHI logged and audited? Who governs how our data is used? Transparency isn’t optional; it’s a compliance requirement. The sketch below shows one way a logged, auditable trail of EHI access can look.
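As a minimal sketch of the kind of audit trail those vendor questions are probing for – every field, value, and file name here is hypothetical, and a real system must meet your organization’s own compliance, security, and retention requirements – logged access to EHI might look like this:

```python
# Illustrative only: a minimal audit record for access to electronic health
# information (EHI). Every field and file name here is hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class EHIAccessEvent:
    actor: str        # user or service account that touched the record
    patient_id: str   # identifier of the record involved
    action: str       # e.g., "view", "export", "automated_denial"
    purpose: str      # stated reason for the access or decision
    automated: bool   # True when an AI or automation step made the call
    timestamp: str    # UTC timestamp in ISO 8601 format

def log_ehi_access(event: EHIAccessEvent, log_path: str = "ehi_access.log") -> None:
    """Append one event as a JSON line so the trail can be audited later."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_ehi_access(EHIAccessEvent(
    actor="roi_autoworkflow",
    patient_id="example-12345",
    action="automated_denial",
    purpose="request could not be validated automatically",
    automated=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

The point isn’t the specific format; it’s that every automated decision touching EHI leaves a trail someone can review.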
AHA Guidance: Governance Isn’t Optional
The AHA has been clear and consistent: responsible AI adoption is a governance challenge, not just a technology challenge. Hospitals and health systems are urged to ensure that any AI deployment is strategically aligned with organizational goals, ethically evaluated (with bias assessments conducted locally rather than relying solely on vendor testing), validated in the specific deployment environment, and continuously monitored after go-live.
The AHA’s emphasis on keeping humans in the loop is especially important for HI professionals. When AI influences decisions about clinical documentation, revenue cycle, or patient access to records, human oversight isn’t just best practice; it’s an ethical and, increasingly, a legal requirement. Responsible AI isn’t a one-time project. It’s an ongoing governance model.
What HI Leaders Should Do Now
The convergence of HTI-5 and AHA guidance points to a clear action agenda:
- Develop a clear AI strategy. Know what tools you have, how they work, and what decisions they influence.
- Ask vendors better questions. Go beyond marketing claims. Demand transparency on data governance, model explainability, and audit trails.
- Build multidisciplinary governance. AI oversight should include HI professionals, clinicians, IT, compliance, and legal – not a single team working in isolation.
- Embed information blocking compliance. Audit automated workflows to ensure no AI-driven process unintentionally restricts patient access to EHI (a minimal review sketch follows this list).
- Stay curious. Terminology and capabilities will keep evolving. Leaders committed to ongoing learning will be best positioned to guide their organizations.
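As one way to operationalize that audit, the sketch below continues the hypothetical JSON-lines log format from the earlier example: it simply surfaces every automated denial so a person can confirm it was legitimate rather than an unintended information-blocking pattern. It’s a starting point under those assumptions, not a compliance tool.

```python
# Illustrative only: surfaces automated denials of EHI access for human review.
# Assumes the hypothetical JSON-lines log format sketched earlier.
import json

def find_automated_denials(log_path: str = "ehi_access.log") -> list[dict]:
    """Return every logged event where an automated step denied access to EHI."""
    flagged = []
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("automated") and event.get("action") == "automated_denial":
                flagged.append(event)
    return flagged

# Each flagged event should go to a person who can confirm the denial was
# appropriate; human review is the point, not an afterthought.
for event in find_automated_denials():
    print(f"Review needed: record {event['patient_id']} denied at {event['timestamp']}")
```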
The Bottom Line
The future of health information management isn’t about choosing between humans and machines. It’s about combining human judgment with machine efficiency – responsibly, transparently, and always in service of patients.
HTI-5 and AHA guidance aren’t obstacles to innovation. They’re a framework for doing it right. AI is already embedded in the tools we use every day. The only real question is whether we deploy it intentionally or let it shape us by default.