By Mike Levin, General Counsel & CISO

When I joined Solera Health as CISO and General Counsel, I knew AI governance needed to be a priority from day one.

Across the industry, the pattern was the same. Marketing teams experimenting with content generation tools. Customer success testing summarization features. Sales curious about AI for meeting notes. Engineers using code completion tools to accelerate development. Everyone with good intentions. No one thinking about what happens when sensitive business information ends up in a third-party training dataset, or when a regulator asks how you're preventing biased health recommendations from reaching members.

We built managed tenants, established approval processes, and developed education around what was actually at stake. Not to slow innovation, but because in healthcare, getting AI wrong isn't just a security incident. It's legal exposure, regulatory action, and potential patient harm.

The Regulatory Patchwork

Healthcare organizations now face a maze of competing AI standards. California's AB 3030 requires disclosure when generative AI contributes to patient communications. Colorado's algorithmic discrimination law creates liability for AI systems that produce biased outcomes. Washington State's Office of the Insurance Commissioner issued guidance requiring insurers to document AI governance programs and demonstrate that AI-driven decisions don't result in unfair discrimination. States are moving at different speeds with different approaches.

President Trump's December 2025 Executive Order attempts to address this fragmentation by establishing an AI Litigation Task Force to challenge "onerous" state laws and directing federal agencies to develop preemptive national standards. The policy goal is clear: replace 50 different regulatory regimes with a single federal framework.

For healthcare organizations, this creates strategic uncertainty. Do you build governance to satisfy the most demanding state regimes knowing they may be preempted? Or do you wait for federal clarity and risk being out of compliance in the interim? The practical answer is governance flexible enough to adapt while rigorous enough to satisfy current requirements.

Here's what most CISOs miss: this isn't just a compliance tracking exercise. It's a liability question. When AI produces a biased recommendation or hallucinates clinical guidance, the inquiry won't just be "were you HIPAA compliant?" It will be "what governance did you have in place, and did you follow it?" That's how plaintiffs' lawyers think. That's how regulators think. Your AI governance framework isn't just a policy document. It's your legal defense.

Shadow AI Vigilance

Getting everyone onto managed tenants was step one. Keeping them there is the ongoing challenge.

Well-intentioned employees will find new SaaS AI tools. A project manager discovers a meeting summarization app. Someone in finance tries an AI assistant for spreadsheet analysis. A developer hears about a new code generation tool at a conference. None of them are trying to create risk. They're trying to be more productive.

This is why shadow AI vigilance has to be continuous, not a one-time cleanup. You need network monitoring to detect unauthorized AI services. You need a culture where people ask before experimenting. And you need an approval process fast enough that employees don't feel like they're being blocked from doing their jobs.
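For teams looking for a starting point on the monitoring side, here is a minimal sketch of the kind of check involved, assuming proxy or DNS logs exported to CSV with user and domain columns and a maintained list of known AI SaaS domains. The domain names, file layout, and approved-tenant list below are illustrative, not a description of Solera's actual tooling.

```python
"""Minimal sketch: flag traffic to unsanctioned AI SaaS domains in exported proxy logs.

Assumptions (illustrative only): logs are a CSV with 'user' and 'domain'
columns, and you maintain your own lists of approved tenants and known AI
service domains. None of the names below reflect Solera's actual tooling.
"""
import csv
from collections import defaultdict

# Hypothetical lists -- replace with your approved managed tenants and a
# regularly updated catalog of AI SaaS domains.
APPROVED_AI_DOMAINS = {"copilot.yourtenant.example.com"}
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}


def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Return a map of user -> unapproved AI domains that user contacted."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                hits[row["user"]].add(domain)
    return dict(hits)


if __name__ == "__main__":
    for user, domains in find_shadow_ai("proxy_logs.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```

In practice this logic usually lives in a secure web gateway, CASB, or SIEM rule rather than a standalone script, but the shape of the check is the same: a maintained catalog of AI services compared against an explicit allowlist, with results routed to whoever owns the approval process.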

The Middle Path

I've seen organizations ban AI entirely. No ChatGPT, no copilots, no generative tools. It feels safe but handicaps your teams. Competitors are using AI to accelerate development and reduce costs. Blocking it doesn't eliminate the risk. It pushes AI underground where you have zero visibility.

I've also seen organizations look the other way. Let people experiment. Trust smart employees. This is how sensitive data ends up in a third-party training dataset and you're having an uncomfortable board conversation.

The middle path is governance as operational infrastructure. At Solera, that means:

  • An AI Governance Committee with Legal, Security, Compliance, Engineering, Product, and Clinical leadership that meets routinely and reviews every high-risk use case.
  • Risk-based classification. High-risk applications require human oversight, documented review, and audit-ready records.
  • Technical controls that enforce policy. Managed tenants so we know exactly what tools are in use. Approval workflows before anyone deploys a new AI capability. Monitoring for drift and bias in production models (a minimal example of one such drift check is sketched after this list).
  • Oversight of our Digital Health Provider network. We're accountable not just for our own AI, but for what our partners deploy to members. We evaluate their governance posture, review their model documentation, and require alignment with our standards.
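The drift-monitoring piece can start equally small. Below is a minimal sketch of a population stability index (PSI) check on a model's score distribution, with illustrative thresholds and synthetic data; it stands in for whatever monitoring stack you actually run and says nothing about the specific checks Solera uses.

```python
"""Minimal sketch: detect distribution drift in a production model's scores
using the population stability index (PSI). Bucket count, threshold, and the
synthetic data are illustrative assumptions, not a real monitoring pipeline."""
import math
import random


def psi(baseline: list[float], current: list[float], buckets: int = 10) -> float:
    """Population stability index between two score samples in [0, 1]."""
    def proportions(scores: list[float]) -> list[float]:
        counts = [0] * buckets
        for s in scores:
            counts[min(int(s * buckets), buckets - 1)] += 1
        # A small floor keeps empty buckets from blowing up the log term.
        return [max(c / len(scores), 1e-6) for c in counts]

    base_p, cur_p = proportions(baseline), proportions(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base_p, cur_p))


if __name__ == "__main__":
    random.seed(0)
    baseline = [random.betavariate(2, 5) for _ in range(5000)]
    current = [random.betavariate(3, 4) for _ in range(5000)]  # shifted distribution
    value = psi(baseline, current)
    # A common rule of thumb treats PSI above roughly 0.25 as drift worth escalating.
    print(f"PSI = {value:.3f} -> {'escalate for review' if value > 0.25 else 'within tolerance'}")
```

The specific statistic matters less than having a documented threshold and a defined escalation path when it trips, so the review is auditable rather than ad hoc.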

The Panel

On February 3rd at Cyber Disrupt 2026 in New York, I'm joining Tom Reagan from Marsh and Michael Srihari from Microsoft to discuss AI Security in 2026. Dave Neuman is moderating.

The discussion will focus on how organizations are defining ownership for AI security, distinguishing between protecting AI systems versus protecting the business from AI decisions, and what CISOs have learned from deployments that didn't go as planned.

If You're in Healthcare

The window to get this right is closing. Regulations are multiplying. Liability frameworks are taking shape. Organizations that wait for regulatory certainty will be playing catch-up to those who built adaptable governance now.

If you're at Cyber Disrupt, find me after the AI Security panel at 3:15pm. If not, reach out to Solera. We've built governance specifically for healthcare AI risk and operationalized it across our network of digital health partners. Happy to share what we've learned.
