11/3/2026

Getting Wiser About WISeR: What CMS’s AI Fraud Crackdown Means for Hospitals


CMS is piloting an AI-driven prior authorization model to curb Medicare fraud. Here’s what to know about navigating the WISeR program and protecting patient access.

By Aneeta Mathur-Ashton at US News

Federal officials are deploying artificial intelligence to review and potentially reject the use of what they classify as “wasteful, low-value” medical services in an effort to limit excessive Medicare spending around procedures that yield little clinical benefit.

The Centers for Medicare and Medicaid Services introduced the Medicare Wasteful and Inappropriate Services Reduction (WISeR) model in January to scrutinize some 17 kinds of services that CMS concludes are either overused or historically have ties to waste, fraud and abuse.

Mike Levin, general counsel and chief information security officer at the digital healthcare technology company Solera Health, says hospitals will want to pay attention to how the pilot program – administered in six states that participate voluntarily – flags services. He spoke with U.S. News in February. The interview has been edited for length and clarity.

Where can AI have the most immediate and measurable impacts when it comes to reducing fraud, waste and abuse in hospital operations without creating more administrative burdens?

AI tools are very good at pattern detection. Most transactions are not fraudulent, so this is where they can actually be very helpful.

AI can help reduce waste by shifting review from the back end – examining claims after the fact – to the front end, helping people find care pathways that are better fits for their situations.

Some physicians are concerned that aggressive fraud detection could inadvertently restrict access to necessary care. How can AI models be designed and governed to avoid this?

So the first principle is non-negotiable: Human clinical authority must always be preserved. AI can surface information, flag patterns and generate recommendations. But any coverage determination has to come back to a human – a licensed clinician must review it.

There's an old IBM presentation from 1971 that says a machine can never make a management decision because a machine cannot be held accountable. I feel like that's more applicable now than ever 50-plus years later.

What safeguards should health systems and AI companies be putting in place to ensure AI-driven decisions are equitable for all patients?

The word ‘equity’ means different things to different people, so you have to start with the training data. AI systems learn from data that they're fed. If your training data is skewed toward certain demographics, which, historically, healthcare data often is, the biases of the previous data will be fed into the models, and you have to be aware of it.

Explainability is the other area you have to focus on. You need a paper trail for how decisions are made. A lot of these new AI solutions just take it on faith that the model is correct, but they can't actually show the logic. Our policies here require model cards that document the intended use, limitations, known biases and error rates. If you can't explain it, you can't defend it – and that's not equitable.

Is there a sense that CMS will expand WISeR if the test period is successful?

If rates of fraud detection increase, we would expect an expansion of WISeR after this test period. Based on feedback to date, however, those rates appear to be low.

How can health systems proactively prepare for a broader adoption of models like this?

Like any CMS program, you have to understand what's in scope. There's not a lot of guidance from a compliance perspective around this, and that's generally true within AI. There are a lot of competing standards right now, so we're in this weird period where everybody's figuring out what's appropriate and what's not.

Second, you have to consider whether you have the infrastructure to actually provide for alternative methods of care. WISeR is going to flag certain things as unnecessary or potentially fraud, waste and abuse, so you have to be able to provide alternatives.

And third is the compliance track record. You’ll have to be able to show what you did and why decisions were made.

What metrics or performance indicators should hospital CEOs be watching right now to see if they should adopt a similar model that keeps their physicians accountable?

WISeR basically allows three days for a standard request, and I think that's going to cause delays in some care. So among the metrics I would focus on are appeal volumes and approval rates.

The other question is ultimately about clinical outcomes. If WISeR is working appropriately, it should improve clinical outcomes. If it's identifying fraud, waste and abuse and preventing approvals for services flagged as such, then it should be driving patients toward better care, and outcomes should improve.

There are also questions around bias and equity indicators. You'll have to look at what WISeR is rejecting and whether it is disproportionately restricting access for vulnerable populations.

What practical steps can health system leaders take to build and maintain trust among clinicians and patients as AI becomes more embedded in the healthcare industry?

When it comes to trust in AI, it's always the same answer: transparency. Patients deserve to know when AI is contributing to their care. You have to make sure that they understand where AI is involved.

You also have to see AI as a tool that augments clinicians – not replaces them.
