AI Code Governance in Healthcare Isn’t Optional Anymore
There’s a version of this conversation that happened in a lot of engineering stand-ups over the past two years: someone proposes adding an AI coding assistant to the workflow, the team saves time, velocity goes up, and leadership is happy. The compliance officer either wasn’t in the room or wasn’t asked.
That gap is now a liability.
Eighty-four percent of developers use or plan to use AI tools in their workflows, according to Stack Overflow’s 2025 Developer Survey. In healthcare software specifically, that means AI-generated code is already touching systems that store, process, and transmit protected health information. And nearly half of healthcare organizations still have no formal approval process for AI adoption. Only 31 percent actively monitor the AI tools their teams use.
That’s not a technology problem. It’s a governance problem. And regulators are no longer willing to wait for the industry to figure it out on its own.
What Changed: The Regulatory Shift That Made This Non-Negotiable
For most of the past decade, AI governance in healthcare was treated as aspirational. Frameworks existed, best practices circulated, and some organizations took them seriously. Most didn’t. The enforcement mechanisms weren’t sharp enough to force urgency.
That changed in 2024 and 2025.
The HIPAA Security Rule Got Its First Major Update in 20 Years
In January 2025, the HHS Office for Civil Rights proposed sweeping changes to the HIPAA Security Rule, citing the rise in ransomware and the need for stronger cybersecurity across the healthcare sector. For organizations deploying AI, the implications are significant.
The update eliminates the distinction between “required” and “addressable” controls, meaning every covered entity must now implement the same uniform security measures: mandatory encryption for all electronic protected health information in storage and in transit, continuous monitoring systems instead of periodic reviews, and breach notification timelines tightened from 60 days to 30. OCR also now explicitly states that the Security Rule governs ePHI used in AI training data and in algorithms developed by regulated entities.
If your development team is using AI tools that touch patient data, those tools fall under this framework. Full stop.
Texas Set the Tone for State-Level AI Enforcement
Texas enacted one of the broadest AI laws in the country. The Texas Responsible AI Governance Act, known as TRAIGA, took effect January 1, 2026. Under it, healthcare providers must give patients written disclosure of any AI use in their diagnosis or treatment before the interaction takes place. Violations run from $10,000 to $200,000 per incident, with penalties accruing daily for ongoing non-compliance.
The Texas law followed SB 1188, which became effective in September 2025 and requires practitioners to personally review all AI-generated content before any clinical decision is made.
Texas is not alone. Nearly 20 states introduced bills modeled on Colorado’s AI governance framework in 2025, and California enacted multiple AI-specific laws affecting healthcare, including AB 3030 (generative AI disclosures in patient communications) and SB 1120 (prohibiting AI-only coverage denial decisions).
The Joint Commission Now Has a Framework
In 2025, the Joint Commission and the Coalition for Health AI released practical guidance for healthcare organizations at any stage of their AI journey. The framework covers seven areas: governance policies, patient privacy, data security, ongoing quality monitoring, safety reporting, bias assessment, and staff training. For accredited organizations, this isn’t just guidance. It’s becoming a baseline expectation.
The era of voluntary AI ethics is over. New laws make data quality, transparency, and continuous human oversight non-negotiable in healthcare.
The Problem Hiding Inside Your Dev Pipeline
Most of the attention in healthcare AI governance focuses on clinical tools: diagnostic algorithms, patient-facing chatbots, coverage decision systems. Those conversations are important. But there’s a quieter version of the same problem living inside software development teams right now.
AI coding assistants like GitHub Copilot, Cursor, and similar tools are generating code across healthcare software. They speed up development, reduce repetitive work, and help teams ship faster. None of that is bad. But when those tools operate without governance, the code they produce can create serious compliance exposure that nobody on the team is actively watching for.
What AI-Generated Code Can Get Wrong
AI coding assistants don’t know your compliance obligations. They know patterns. They generate code based on what’s statistically probable given a prompt, not what’s legally required given your regulatory environment. That creates specific categories of risk:
- Logging that captures PHI inadvertently. An AI-generated function that logs request parameters for debugging purposes can easily capture protected health information in ways that violate the HIPAA minimum necessary rule.
- Hardcoded credentials and insecure data handling. AI-generated boilerplate frequently includes patterns that look functional but fail security standards, including improper handling of sensitive data fields.
- Third-party dependencies with unknown security postures. AI tools regularly suggest library imports and integrations without evaluating whether those dependencies have been vetted under your security framework.
- No audit trail. When AI writes a block of code, there’s no inherent documentation of what prompt produced it, what decision logic was applied, or who reviewed it. That creates a gap when an auditor asks how a particular piece of code came to exist.
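To make the first of those risks concrete, here's a minimal sketch of the kind of log-redaction guard a governance policy might require before request parameters ever reach a log file. The field names in `PHI_FIELDS` are illustrative assumptions, not a complete PHI taxonomy; a real deployment would derive them from the organization's data classification policy.

```python
import logging

# Illustrative set of sensitive field names (assumption, not a standard).
PHI_FIELDS = {"ssn", "mrn", "dob", "patient_name", "address"}

def redact(params: dict) -> dict:
    """Return a copy of request parameters with PHI-like fields masked."""
    return {k: ("[REDACTED]" if k.lower() in PHI_FIELDS else v)
            for k, v in params.items()}

logger = logging.getLogger("api")

def log_request(params: dict) -> None:
    # Log only the redacted view, never the raw parameters.
    logger.info("request params: %s", redact(params))
```

The point is structural: redaction happens in one audited function, so an AI-generated debug statement that routes through `log_request` can't quietly widen what gets captured.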
The Blue Shield of California case is instructive here. Misconfigured code, not a hack, led to the inadvertent sharing of 4.7 million members’ data with Google’s advertising platform between 2021 and 2024. The Comstar breach in May 2025 exposed 585,621 patients’ records after investigators found the organization had failed to conduct a HIPAA-compliant risk analysis for its AI-enhanced systems. Serviceaide’s unsecured database exposed the PHI of 483,126 Catholic Health patients the same month. See how Hoyack has approached these challenges in our AI governance case study.
These aren’t the result of bad intentions. They’re the result of systems operating without governance frameworks.
You’re liable for what your AI does, even if you didn’t write the code.
What AI Code Governance Actually Looks Like in Practice
Governance is one of those words that sounds like it means slowing everything down. In practice, a well-designed governance framework does the opposite. It lets teams move faster because the compliance checks are automated and running continuously, not bunched up at the end of a release cycle.
Here’s what it looks like when it’s working:
Policy as Code
Compliance rules get encoded directly into the CI/CD pipeline. When a developer opens a pull request, automated checks run against the defined policy set before the code is ever reviewed by a human. Non-compliant commits get flagged or blocked immediately, at the point where fixing them is cheap. The same rules apply consistently across every repository, every team, and every AI-generated contribution.
This is fundamentally different from periodic audits, where violations accumulate undetected until someone specifically looks for them.
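As a sketch of what such a gate can look like, here's a toy policy check that scans source text against a version-controlled rule set and exits non-zero on violations, which is what lets a CI pipeline block the merge. The patterns are deliberately simplistic assumptions; production scanners use far richer rules.

```python
import re
import sys

# Hypothetical rule set; real pipelines would load these patterns from a
# shared, version-controlled policy file applied across every repository.
RULES = [
    (re.compile(r"(password|api_key|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
     "hardcoded credential"),
    (re.compile(r"print\(.*patient", re.I),
     "possible PHI written to stdout"),
]

def check(source: str) -> list[str]:
    """Return a list of policy violations found in a source string."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in RULES:
            if pattern.search(line):
                violations.append(f"line {lineno}: {message}")
    return violations

if __name__ == "__main__":
    problems = check(open(sys.argv[1]).read())
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit blocks the pipeline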
Audit Trails Tied to AI Outputs
Every piece of AI-assisted code should be traceable to the developer who accepted it, the tool that generated it, and the review process it went through. This matters during HIPAA audits, SOC 2 assessments, and any incident investigation. Artifacts that can be produced on demand, rather than reconstructed after the fact, are the difference between a manageable audit and a protracted one.
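One way to make those artifacts producible on demand is to write an append-only record at the moment AI-assisted code is accepted. The schema below is an illustrative assumption, not an established standard; it stores a hash of the accepted snippet rather than the snippet itself.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class AiCodeRecord:
    """One audit entry per AI-assisted contribution (illustrative schema)."""
    commit_sha: str
    developer: str   # who accepted the suggestion
    tool: str        # which assistant generated it, with version
    reviewer: str    # who approved it in review
    code_hash: str   # hash of the accepted snippet, not the snippet itself
    timestamp: str

def make_record(commit_sha, developer, tool, reviewer, snippet) -> dict:
    return asdict(AiCodeRecord(
        commit_sha=commit_sha,
        developer=developer,
        tool=tool,
        reviewer=reviewer,
        code_hash=hashlib.sha256(snippet.encode()).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))

# Append-only JSON lines are easy to hand an auditor on demand.
record = make_record("a1b2c3", "jdoe", "assistant-x v1.2", "rlee", "def f(): ...")
print(json.dumps(record))
```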
Shift-Left Compliance
The shift-left model pushes compliance checks as early in the development process as possible. Rather than evaluating whether a feature is compliant after it’s built, shift-left embeds those checks into the IDE, the commit process, and the pull request stage. Developers get feedback while the context is still fresh, before technical debt has a chance to accumulate.
This also means compliance teams aren’t being handed a completed product and asked to approve it under deadline pressure. They’re operating as a quality layer throughout the process.
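A concrete shift-left check, sketched under assumptions: a commit-time script that parses a file and flags imports of dependencies not yet vetted under the security framework, catching the third-party-dependency risk described above before the code leaves the developer's machine. The `APPROVED_LIBS` set is a placeholder.

```python
import ast

# Illustrative allowlist; a real one would come from the security team.
APPROVED_LIBS = {"logging", "json", "hashlib", "datetime"}

def unapproved_imports(source: str) -> list[str]:
    """Parse Python source and list imports outside the approved set."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            flagged += [a.name for a in node.names
                        if a.name.split(".")[0] not in APPROVED_LIBS]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] not in APPROVED_LIBS:
                flagged.append(node.module)
    return flagged
```

Wired into a pre-commit hook or the IDE, a check like this gives the developer feedback in seconds, while the context is still fresh.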
Vendor Approval Lists
Not all AI coding tools are created equal from a compliance standpoint. Some have executed Business Associate Agreements and carry SOC 2 Type II certifications. Others haven’t been evaluated at all. A vendor approval list, maintained and enforced at the engineering team level, ensures that developers aren’t inadvertently introducing unapproved tools into workflows that touch regulated data.
This is especially important as AI tools proliferate and individual developers begin adopting tools based on personal preference rather than organizational policy.
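Enforcement is easier when the approval list is machine-readable rather than a wiki page. A minimal sketch, with hypothetical tool names and statuses standing in for a real registry:

```python
from enum import Enum

class Status(Enum):
    APPROVED = "approved"        # BAA signed, SOC 2 Type II verified
    PENDING = "pending review"
    PROHIBITED = "off-limits"

# Hypothetical registry; names and statuses are illustrative only.
VENDOR_LIST = {
    "assistant-a": Status.APPROVED,
    "assistant-b": Status.PENDING,
    "assistant-c": Status.PROHIBITED,
}

def may_use(tool: str) -> bool:
    """Only explicitly approved tools may touch regulated workflows."""
    return VENDOR_LIST.get(tool) is Status.APPROVED
```

Defaulting unknown tools to "not approved" is the design choice that matters: a tool a developer adopted on personal preference fails the check until someone formally reviews it.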
What This Means by Role
The governance question looks different depending on where you sit. Here’s how it breaks down:
For CTOs and Engineering Leaders
The immediate priority is visibility. Do you have a complete inventory of AI tools your developers are using? Do you know which of those tools have signed BAAs? Do you know how AI-generated code is being reviewed before it hits production?
If the answer to any of those questions is “not really,” that’s the starting point. The second priority is automation. Manual compliance checks at scale are unreliable. Policy-as-code enforcement, embedded in the CI/CD pipeline, is the only approach that keeps up with the speed at which AI-assisted teams can produce code.
For Healthcare IT and Compliance Officers
The 2025 HHS proposed rule change explicitly requires organizations to include AI systems in their HIPAA risk analysis and risk management activities. This isn’t optional, and it isn’t satisfied by a vendor’s generic SOC 2 report.
The shared responsibility model is critical to understand here. A vendor providing a HIPAA-eligible AI tool is responsible for securing the infrastructure. Your organization is responsible for configuring it correctly, managing access, ensuring proper consent is in place, and monitoring how it’s used. A BAA protects you legally. Proper configuration and ongoing monitoring protect you practically.
For Hospital and Health System Executives
The liability question is unambiguous. Courts and regulators have consistently held that organizations are responsible for what their AI systems do, regardless of whether those systems were built in-house or procured from a vendor. Governance structures that include executive-level oversight, documented policies, and clear accountability are now a legal necessity, not a best practice recommendation.
The Joint Commission framework makes this concrete: there should be a formal governance structure responsible for oversight of AI tools, with a mechanism to keep the organization’s governing body updated on uses, outcomes, and adverse events.
A Starting Checklist for Healthcare Software Teams
Governance programs don’t have to be built in a day, but they do need to start somewhere. These eight steps represent the practical foundation:
- Inventory every AI tool in your development environment and classify each by its risk level, the data it touches, and whether it has been formally approved.
- Verify BAA availability for any AI tool that processes or could access ePHI. If a vendor won’t sign a BAA, that tool cannot legally be used in regulated workflows.
- Confirm SOC 2 Type II certification for AI coding assistants used in healthcare software development. Self-attestation is not sufficient.
- Implement policy-as-code enforcement in your CI/CD pipeline so that compliance checks run automatically on every commit and pull request.
- Establish audit trail requirements that link AI-generated outputs to the developer, the tool, and the review process.
- Build and maintain a vendor approval list and make clear to your development team which tools are approved, which are pending review, and which are off-limits.
- Include AI systems in your annual HIPAA risk analysis. The HHS proposed rule change makes this explicit. Document it.
- Document your governance framework. Auditors will ask. Having written policies, version-controlled rule sets, and evidence of enforcement makes a material difference in how that conversation goes.
The Gap Between “We Use AI” and “We’re Covered”
AI adoption in software development is not slowing down. The tools are too useful, and the competitive pressure to move faster is too real. Healthcare organizations that refuse to let their development teams use AI coding assistants will eventually fall behind. That’s not the argument against governance.
The argument for governance is that organizations that embed compliance into their development process can move fast and stay protected at the same time. The ones that don’t are accumulating technical debt they can’t see and legal exposure they haven’t priced.
The regulatory environment in 2025 and 2026 has made one thing clear: being unaware of what your AI tools are doing inside your codebase is no longer a defensible position. The question isn’t whether your organization needs an AI code governance framework. It’s how long you can afford to operate without one.
Built Compliant from Day One
Hoyack is a SOC 2 certified software development firm that builds healthcare technology with compliance embedded in the process, not retrofitted after the fact. If your team is using AI coding tools without a governance framework, let’s talk about what that exposure actually looks like and how to close it.




