4/3/2026

Digital Health Delivery Company Solera Tackles AI Governance Issues

Solera has created a framework for responsible and transparent use of AI in digital health for use across its partner ecosystem

By David Raths at Healthcare Innovation

Solera Health has created a digital platform that matches health plan members to more than 20 curated digital health solutions. Two of the company’s execs recently sat down with Healthcare Innovation to discuss the company’s business model and growth as well as its approach to AI governance across its digital health partner network. 

Glenn Alphen, Solera’s chief commercial officer, spoke about the company’s founding and growth, and Mike Levin, the company’s general counsel and chief information security officer, described the complexity of developing an AI governance framework across its ecosystem of digital health solution partners.

As an example of the type of partnership it develops with payers, Blue Cross and Blue Shield of Texas just announced its Unity Health Hub, powered by Solera Health, that will link to customer service and condition management resources to provide members with a coordinated experience.

Healthcare Innovation: Could you give us an overview of the company's business model and talk about some of the digital health partners it works with?

Alphen: The company was founded under the Affordable Care Act to serve Medicare Advantage members and to drive them to diabetes prevention programs locally and potentially digitally, and then turn their progress into claims. We started to build a front end, using interviewing techniques to understand the individuals using it. Over time, our commercial customers who also had Medicare Advantage said that there were some digital programs that would be great for their commercial population in weight management and diabetes prevention. Could we do that as well? So we began to build out a model that steered people to those sorts of programs and figured out ways to build those as claims.

We began collecting information on engagement and outcomes. Are you actually losing weight? Are you actually doing the program? We built what is essentially our own EMR, where we keep track of all that data coming in through those partners over time. Now we're at eight conditions.

We have a number of large health plan customers that use all of our condition categories primarily in commercial markets — whether it's fully insured or ASO [Administrative Services Only] sell-through.

When I am at a conference and people ask what we do, I say, ‘See everything in this room? We’re trying to make it easy for an individual to navigate and to take the point solution fatigue away from the health plan or the employer by being the place where a network for digital and virtual care exists, so we're really creating a network approach.’

HCI: Does Solera vet the digital health solutions in terms of their efficacy or trustworthiness? Or do the health plans say to you that they work with a particular company and would like you to make it part of your network?

Alphen: We do have plans say, ‘Hey, we love these guys. We want to make them part of the network.’ But because of our vetting process, it doesn't always happen. We start with clinical vetting. And then there’s business alignment. Do they serve a care path that we already serve or do they serve a new care path? Because that's how we think about it — what’s the appropriate care path? There's a very clinical lens. The trick is they have to agree to more of a pay-for-performance model, which is that matching up of engagement with clinical outcomes. Can they share the data so that we can build a value-based framework around billing? There are different billing methodologies. They are often per member/per month, and that's where a lot of that point solution fatigue comes from. The employers or the health plans are always having to adapt to somebody's new methodology. We clean that up for them, generally speaking.

HCI: Solera just announced a new behavioral health network with companies Calm and Lyra Health. Could you talk about that?

Alphen: Yes. We’ve been very successful in the mental health space with some prior partners. We thought we needed a little bit more of an expansive category to really meet the needs of our customers. Calm grabs a lot of attention because of their deep consumer background, but they've launched Calm Health for Employers, which also asks questions about other conditions that we serve. We’ll be able to map some of that data into other offerings that we have. Behavioral health gives us some flexibility to do some more specific offerings. I don't really want to get into what those are yet, but there are other areas that we can go into in behavioral health.

HCI: Let me turn to Mike. I saw some information about Solera unveiling a framework for responsible and transparent use of AI in digital health to be used across your partner ecosystem. Could you first talk about where governance most often collapses once AI goes operational and what effective, enforceable AI oversight needs to look like now in this space?

Levin: You're asking: how does AI governance break down? Generally, it's the same things that you see in security. First and foremost, it's inventory drift. A lot of organizations don't even realize that they are using AI specifically in production or that their network partners are utilizing it, so they don't even have a proper inventory of where the AI is actually embedded.

Monitoring atrophy happens quite a bit, particularly when you're building out a governance program. The monitoring cadence starts to drift and the people who are monitoring may not be monitoring continuously, and that becomes a huge risk. The third thing is incident response gaps. When we engage with our payers, this is the one that they are continually asking us about. A pilot doesn't actually surface real incidents because it's very limited in scope. But once you're actually out in the real world, production is very different. When an AI makes a problematic recommendation, how do you respond to it? In a live clinical context, you need an escalation path. You need to be pulling in the proper subject matter expertise. Those have very limited 24- to 72-hour reporting windows as well. More than anything else, the incident response is not really thought through. It has to mirror what you do from a cyber perspective. If models already exist on the security side, you can basically copy them over to the AI side.

HCI: Solera is sitting in a unique position at the center of a digital health ecosystem of separate companies. Is this governance framework one you're building to help all those companies — a baseline you expect them to meet in terms of things like transparency?

Levin: We have a fairly expansive AI governance program for our digital health providers. This is something that we keep being asked about by our payers. There's a lot of anxiety around this, because it's an unknown and there is lots of overlapping and sometimes contradictory guidance around this. We see dual risks. There's the clinical and there's the compliance, and they don't always align. Clinical risk is about patient safety and care quality. Does the AI surface accurate recommendations? Does it hallucinate? Does it perform equitably? If the data that is coming in has bias, the results that come out also have bias. Could it lead to harm if the output is wrong?

Then there's the compliance risk, which is the one that you hear more about from the legal side, and that’s regulatory exposure. Everybody's familiar with HIPAA, but there are all these new laws, particularly in California and Colorado. Washington state has one, too. The FTC is looking like they're going to start enforcing this as well. So there's a lot of fear about the legal risk perspective as well.

We have a cross-functional oversight committee for our AI governance, which has engineering, legal, security, and compliance. Each of them has a unique perspective on the AI problem, if you will. Those perspectives need to work together, because the risks that I identify are not the same risks that the engineering team or the clinical team will see. This is how you have to manage it. The practical reality is that good clinical governance often satisfies the compliance requirements. So if you do one right, it'll often lead to the other. You have to document everything. You need a huge paper trail.

HCI: Are these digital health companies in your network appreciative that you're doing this? Is it like you're helping them, or are you the taskmasters making them do this stuff?

Levin: Well, some of them are less happy than others. We have a range of digital health partners because we have a fairly large portfolio, and some of them are much more mature, and they're able to provide model cards. They're able to explain risk, to explain bias and other things. We have to walk them through this, but by doing that, they actually build out better practices internally.

The part that surprised me more than anything else was how you might think AI is everywhere, but it's really not always being utilized directly in the delivery of care. It's in the back end. It's basically being used for coding or as a copilot in the office, but it's not actually built into a lot of these healthcare apps, because there's so much anxiety around it from a compliance perspective.

HCI: I read that full implementation across the partner network was expected by the end of the third quarter of 2025. Did that stay on schedule?

Levin: There have been some changes in our network, so since that statement we've had some folks join and others have left. But we do have visibility about the AI status across all of our partners. We know the posture of all of them, and we are helping the ones that need the help.

HCI: And Solera is developing an AI maturity scoring capability with interactive dashboards for security and compliance, expected to roll out this year?

Levin: We are working on that as part of our larger Halo platform. It's one of the product features. Think of it as a scoring mechanism for the digital health providers — from a security perspective, as well as from an AI risk perspective. Think of it almost like a credit score.
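The credit-score analogy can be made concrete with a small sketch. The categories below (inventory, monitoring, incident response, transparency) are drawn from the governance failure modes Levin describes earlier in the interview, but the weights, the 0–100 subscores, and the 300–850 band are illustrative assumptions — this is not Solera's actual Halo scoring methodology.

```python
# Hypothetical sketch of a partner "AI maturity" score, loosely modeled on
# the credit-score analogy. Weights and scale are assumptions for illustration.

def maturity_score(subscores: dict[str, float]) -> int:
    """Combine 0-100 subscores into a single credit-score-style number (300-850)."""
    weights = {                      # assumed weighting, for illustration only
        "inventory": 0.25,           # does the partner know where AI is embedded?
        "monitoring": 0.25,          # is model behavior monitored continuously?
        "incident_response": 0.30,   # escalation paths, 24-72 hr reporting windows
        "transparency": 0.20,        # model cards, bias and risk documentation
    }
    weighted = sum(weights[k] * subscores.get(k, 0.0) for k in weights)
    # Map the 0-100 weighted average onto a familiar 300-850 band.
    return round(300 + weighted / 100 * 550)

partner = {"inventory": 90, "monitoring": 70,
           "incident_response": 60, "transparency": 80}
print(maturity_score(partner))  # prints 707
```

A single number like this lets a payer compare partners at a glance, while the per-category subscores point to where remediation help is needed.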

HCI: That already sounds like a lot, but are there any other big tasks on your to-do list for 2026?

Levin: That is a lot. I'd say that AI probably consumes about 50% of my team’s time from a governance and oversight perspective, because there's so much unknown about it right now, and it's so dynamic. But we’re not alone. I've seen this within the payer ecosystem as well. A lot of the payers have invested fairly heavily in building AI governance teams, and no two of them are the same. They all respond differently. They're all interpreting the regulations differently. If you've seen one AI governance program, you’ve seen one AI governance program.
