Article

Medicare Advantage Plans Brace for Sweeping 2025 CMS Audit and Payment Rule Changes

  • June 11, 2025

CMS Tightens Oversight of Medicare Advantage Plans

In the coming year, the nation’s Medicare Advantage insurers – which cover over 31 million Americans – face an unprecedented wave of regulatory changes and scrutiny. The Centers for Medicare & Medicaid Services (CMS) has quietly ushered in a more aggressive audit regime for Medicare Advantage (MA) plans, alongside significant updates to how these plans are paid for the health risks of their enrollees.

Health plan CEOs, whose organizations collectively received about $455 billion in Medicare payments last year, are now grappling with what these changes mean operationally and financially. Many are preparing for a future in which annual federal audits become a routine part of doing business and risk adjustment rules are rewritten to curb excess payments.

Oversight Intensifies: RADV Audits Expand in 2025

Late this spring, CMS announced a dramatic expansion of its Risk Adjustment Data Validation (RADV) audits – the primary tool for verifying that MA plan payments are justified by members’ documented health status. Historically, CMS audited only a small sample (around 60) of MA contracts each year, targeting plans suspected of excessive billing. That is changing effective immediately: CMS will audit all eligible Medicare Advantage contracts annually (approximately 550 plans in total)1. In addition, the agency is fast-tracking a backlog of past years’ audits, pledging to complete all outstanding audits for payment years 2018 through 2024 by early 2026. This means health plans could be hit with multiple audit findings in short succession, condensing what might have been a decade of scrutiny into a much shorter window.

“We are committed to crushing fraud, waste and abuse across all federal healthcare programs,” Dr. Mehmet Oz, the CMS Administrator, said in a statement announcing the new audit strategy. While emphasizing the value of Medicare Advantage, Oz underscored that CMS must ensure “[plans] are billing the government accurately.”2

The RADV audits themselves will also become more intensive. CMS is increasing the sample size of medical records it reviews for each plan from about 35 records to as many as 200 records per plan annually1. By reviewing a larger slice of each plan’s claims, CMS aims to make any identified error rates more credible for extrapolation – a process of projecting the sample’s error rate onto the plan’s entire member population1. CMS finalized a rule in 2023 that, for the first time, allows auditors to extrapolate overpayment findings starting with audits of 2018 claims onward. In the past, if an audit uncovered (for example) $100,000 in improper payments in the sample, the plan would repay that amount; now CMS can multiply that figure across all similar cases in the year – a change that could turn modest audit findings into multimillion-dollar liabilities for plans.
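
As a back-of-the-envelope illustration of how extrapolation changes the stakes (the sample size, record count, and dollar figures below are hypothetical, not from any actual audit):

```python
# Illustrative sketch: how extrapolation turns a sample finding
# into a contract-wide liability.

def extrapolated_overpayment(sample_overpayment, sample_size, population_size):
    """Project the per-record overpayment found in an audit sample
    across all similar records in the contract."""
    per_record = sample_overpayment / sample_size
    return per_record * population_size

# Before extrapolation: the plan repays only what the sample uncovered.
sample_finding = 100_000          # dollars of improper payments in the sample
repayment_old = sample_finding

# After extrapolation: the sample's error rate is projected onto, say,
# 50,000 similar member-year records.
repayment_new = extrapolated_overpayment(sample_finding, sample_size=200,
                                         population_size=50_000)

print(f"Sample-only repayment:  ${repayment_old:,.0f}")    # $100,000
print(f"Extrapolated repayment: ${repayment_new:,.0f}")    # $25,000,000
```

The same $100,000 finding becomes a 250-fold larger obligation once projected across the contract, which is why larger audit samples and extrapolation together change the financial calculus so sharply.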

To support this ambitious oversight agenda, CMS is bolstering its audit arsenal. The agency will deploy “enhanced technology” – including advanced data analytics and, potentially, artificial intelligence – to flag suspect diagnoses in billing data1. It is also undertaking a massive workforce expansion, increasing its team of medical coders from just 40 to roughly 2,000 by September 2025 to manually review records and confirm unsupported codes2. This 50-fold staffing surge underscores the scale of CMS’s commitment. All Medicare Advantage plans can now expect an audit each year, a stark departure from an era when many insurers never faced a RADV audit at all1.

For health plans, the immediate implication is a significant operational burden. Insurers will need to respond to ongoing documentation requests, often under tight deadlines, and may find themselves in perpetual audit preparation mode. Some plans are already ramping up their own internal audit teams and processes to mirror CMS’s efforts, aiming to catch and correct errors proactively before federal auditors arrive.

A Revamped Risk Adjustment Model and Policy Changes

Behind the audit crackdown is a broader effort to refine how risk adjustment – the system that pays more for sicker patients – is administered. In 2024, CMS began phasing in a new risk adjustment model (known as “V28”) for Medicare Advantage, the first major overhaul in years. This updated model recalibrates which diagnoses count toward a patient’s risk score and how much they raise payments. Notably, CMS removed over 2,000 diagnosis codes from the model that it deemed prone to being “up-coded” – the practice of documenting extra or more severe conditions to inflate payments3. The goal is to target codes most likely to be abused and ensure that payments better reflect genuine health status.

The transition to the new model is occurring gradually to mitigate disruption. For payment year 2024, risk scores were calculated with a blend (33% new model, 67% old model). By 2025, the balance flips to 67% new model (V28) and 33% old4, and by 2026 the new model will be fully in place. The V28 model introduces 115 condition categories (up from 86 in the previous model) but with a more selective set of diagnosis codes – 7,770 codes mapping to those categories, versus 9,797 codes in the old model4. In practical terms, some diagnoses that used to boost payments will no longer do so, or will do so to a lesser degree. Chronic conditions like diabetes, depression, or vascular disease are among those seeing coding criteria tightened or subdivided to prevent overstating a patient’s illness burden, according to policy analysts.
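
The phase-in arithmetic can be sketched directly. The blend weights below follow the published schedule; the individual member scores are invented for illustration:

```python
# Sketch of the V24 -> V28 phase-in blend described above.

BLEND_WEIGHTS = {            # payment year -> (V28 weight, V24 weight)
    2024: (0.33, 0.67),
    2025: (0.67, 0.33),
    2026: (1.00, 0.00),
}

def blended_risk_score(year, v28_score, v24_score):
    w28, w24 = BLEND_WEIGHTS[year]
    return w28 * v28_score + w24 * v24_score

# A hypothetical member whose score drops under the stricter V28 model:
v24, v28 = 1.20, 1.05
for year in (2024, 2025, 2026):
    print(year, round(blended_risk_score(year, v28, v24), 4))
```

For this member, the blended score steps down each year (roughly 1.15, then 1.10, then 1.05), showing how the transition spreads the payment impact over three cycles instead of landing all at once.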

CMS argues these changes will improve payment accuracy and curb excess spending. Agency officials noted that Medicare Advantage plans have been paid billions of dollars more than traditional Medicare would have spent on similar patients, partly due to aggressive coding practices. Indeed, CMS now estimates MA plans overbill the government by about $17 billion a year through unsupported diagnoses, with some estimates as high as $43 billion. The new risk model, coupled with stepped-up audits, is designed to rein in this overspending. MedPAC, a congressional advisory body, has reported that payments to MA plans in 2024 were on track to be roughly $83 billion higher than they would have been in fee-for-service Medicare for the same enrollees – a gap these policies seek to narrow.

Health plans and providers, however, have voiced concern about the speed and impact of these changes. The industry pushed back hard when the new model was proposed, prompting CMS to adopt the three-year phase-in rather than an immediate switch3. Many insurers and health systems fear the model’s stricter coding could reduce payments for vulnerable patients, potentially affecting benefit offerings. CMS’s own projections suggested that despite the model changes, average plan payments per enrollee would still rise in 2024 and 2025, due to other adjustments. But those increases may be smaller than plans are used to, and impacts will vary by plan3.

The American Medical Group Association, representing provider organizations, cautiously noted that the phase-in gives CMS “an opportunity to refine the plan” if unintended consequences emerge by 2026. In essence, while regulators see the new model as a needed course correction, the industry sees a potential budget cut in disguise, to be fought or at least closely watched.

Operational and Compliance Challenges for Health Plans

For health plan executives, the confluence of comprehensive audits and new risk scoring rules translates into a daunting compliance agenda. Operationally, plans must strengthen their documentation practices and IT systems immediately. Every diagnosis code submitted for payment must be backed by proper medical record evidence – not just to withstand a CMS audit, but to ensure the plan isn’t overstating its risk scores under the refined model. Many insurers are conducting internal RADV-style audits on 2018–2022 data right now, essentially red-flagging any diagnosis in their system that might not hold up to scrutiny. By performing these self-audits and deleting or correcting unsupported codes in CMS’s database, plans can mitigate future penalties4. This proactive approach, encouraged by consultants, aims to “reduce and manage RADV financial exposure” by addressing issues before the government does.

Provider engagement is another critical piece. Medicare Advantage insurers often rely on networks of physicians and hospitals to document diagnoses, and historically some have incentivized providers to code comprehensively. Now the dynamic is shifting: plans are implementing new provider training and education on the V28 coding changes, stressing accurate, fully supported diagnoses. Some plans are also revisiting their contracts with providers. Those that share risk with providers (through value-based arrangements or bonus incentives) may insert clauses making providers financially liable for coding errors that lead to audit recoveries. If a CMS extrapolated audit claws back millions of dollars from a plan, the plan doesn’t want to shoulder that alone – it may seek to recover portions from the physician groups whose documentation was found lacking. This is a delicate conversation, but it reflects how seriously plans are treating the new audit risk.

Internally, compliance and audit departments at MA organizations are bracing for a heavier lift. Plan CEOs are evaluating whether their teams have the bandwidth and expertise to handle continuous audit requests, or if they need to enlist outside help (such as specialized auditing firms or consulting partners). The administrative load of responding to RADV audits – pulling hundreds of medical records from archives, coding them, and submitting rebuttal evidence – is significant, especially for smaller regional plans. Plans must also keep pace with evolving guidance: CMS recently issued updated RADV audit dispute and appeal instructions (effective January 2025), clarifying how plans can challenge audit findings through a reconsideration process2. Ensuring the legal team is ready to navigate these appeals, especially when extrapolated sums are on the line, will be crucial.

Finally, IT systems need updates to accommodate the 2025 risk model blend and forthcoming full model transition. Claims and billing software must incorporate the new HCC definitions so that as of January 1, 2025, incoming claims are evaluated under the correct risk adjustment logic. Misalignments here could directly affect revenue projections and compliance. Some plans have had to reconfigure analytics dashboards and retrain their coders and coding vendors on the model’s nuances – for example, which codes no longer map to an HCC (and thus no longer increase payments)4. This system work is technical, but vital to avoid errors in submissions that could trigger audits or payment shortfalls.

Financial Stakes and Industry Response

The financial implications of CMS’s 2025 changes are multifaceted. On one hand, Medicare Advantage insurers might see lower revenue growth per patient as risk scores level off under the tighter model. On the other hand, they face the possibility of paying back substantial sums if audits uncover past overpayments. Even a small error rate can translate into a large liability when extrapolated across tens or hundreds of thousands of members. Past RADV audits (2011–2013) found overpayments in the range of 5% to 8%2. If a similar error rate were found today and extrapolated, a mid-sized plan with $1 billion in annual revenue might have to refund $50–$80 million for a single year – a heavy hit to earnings.
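
A quick sanity check of that range, using the historical 5–8% error band and the hypothetical $1 billion revenue figure from above:

```python
# Back-of-the-envelope exposure estimate for a single audited year.

annual_revenue = 1_000_000_000   # hypothetical mid-sized plan, $1B MA revenue

for error_rate in (0.05, 0.08):
    exposure = annual_revenue * error_rate
    print(f"{error_rate:.0%} error rate -> ${exposure / 1e6:,.0f}M exposure")
```

Multiply that single-year exposure by the number of audit years CMS closes out at once (2018 through 2024) and the concentration risk described below becomes clear.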

Compounding the concern, CMS’s decision to finalize audits from 2018 through 2024 in one burst means some plans could be writing checks for multiple years’ worth of overpayments almost at once. Financial officers are reviewing reserves and worst-case scenarios now. “If CMS identifies and extrapolates overpayments for those years, financial losses due to recoupment will be concentrated over a much shorter time period than under the prior timetable,” the Ropes & Gray analysis cautioned1. In other words, what might have been staggered as a series of smaller repayments over a decade could become a tidal wave of obligations around 2025–2026. This has implications for plan budgeting, dividend plans, and even market valuations – indeed, stock analysts have begun asking public MA insurers about their audit exposures in earnings calls.

Preparing for Change: Mitigation Strategies for Plans

In response to these challenges, savvy health plans are taking a multi-pronged approach to mitigate risk. One key strategy is investing in advanced analytics to identify coding outliers. Plans are leveraging data algorithms to scan claims for patterns – for example, providers who code unusually high rates of certain lucrative diagnoses – and then conducting targeted chart reviews to verify those cases. By doing so, plans can either validate the codes with proper documentation or proactively “unlock” and remove unsupported diagnoses from their submissions, thereby inoculating against future audit findings. This kind of internal cleanup, though potentially reducing payments in the short term, can save a plan from a costly claw-back down the road. Several large insurers have created special RADV task forces for this purpose, blending expertise from compliance, IT, and clinical coding teams.
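
One minimal way to sketch such an outlier screen is a simple z-score test on provider coding rates. All provider IDs, rates, and the threshold below are hypothetical; production screens would use far richer features:

```python
# Toy sketch: flag providers whose rate of coding a given diagnosis
# sits far above the network mean.

from statistics import mean, stdev

def flag_outliers(coding_rates, z_threshold=1.5):
    """coding_rates: {provider_id: share of members coded with the diagnosis}.
    Returns providers whose rate exceeds the mean by z_threshold std devs."""
    mu = mean(coding_rates.values())
    sigma = stdev(coding_rates.values())
    return [p for p, rate in coding_rates.items()
            if sigma > 0 and (rate - mu) / sigma > z_threshold]

rates = {"prov_a": 0.12, "prov_b": 0.10, "prov_c": 0.11,
         "prov_d": 0.13, "prov_e": 0.38}   # prov_e codes far above peers
print(flag_outliers(rates))   # ['prov_e']
```

A flagged provider is a prompt for targeted chart review, not a conclusion: the elevated rate may reflect a genuinely sicker panel, which is exactly what the documentation check resolves.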

Education and training are also front and center. Health plan leaders are doubling down on provider education programs to reinforce documentation standards. For example, physicians are being reminded that every chronic condition must be explicitly documented each year in the medical record to count for risk adjustment – and if they add a diagnosis, it should be one actively managed or treated, not just noted in passing. Plans are updating provider handbooks to reflect diagnoses that no longer risk-adjust under the new model, so clinicians don’t waste effort coding conditions that won’t contribute to funding. Some plans are even offering or requiring “documentation integrity” training sessions for network providers, knowing that many audit issues can be prevented at the point of care through better record-keeping.

Another defensive measure is incorporating more stringent audit clauses in vendor contracts. Many health plans use third-party vendors for chart reviews or in-home assessments to help identify additional diagnoses. In the wake of the RADV rule, plans are making sure those vendors attest to the accuracy of codes they submit on the plan’s behalf – and assume liability if codes don’t hold up in an audit. Similarly, plans in risk-sharing arrangements with providers are clarifying how any recovered payments will be handled, as noted earlier. The overarching aim is to align incentives so that everyone – plan, provider, vendor – has “skin in the game” to only report truthful, supportable diagnoses.

From a financial planning perspective, some insurers are bolstering reserves or reinsurance coverage to cushion against possible repayments. Just as importantly, they are scenario-testing the impact of lower risk scores. CFOs are running models on 2025 revenue under various coding intensity assumptions (for instance, if certain common diagnoses drop out of HCC scoring) to guide bids and benefit design for the upcoming plan year. In extreme cases, a few plans have hinted they might need to trim benefits or adjust premiums if the new model significantly undercuts their payments – a move that would likely invite member and political backlash. For now, most are taking a wait-and-see approach, hoping that improved documentation and coding accuracy can blunt the negative financial impacts.

Navigating the Changes with Technology and Support

As Medicare Advantage organizations brace for this new regulatory landscape, many are turning to technology and specialized support services to adapt more effectively. Digital operations platforms and analytics tools are emerging as essential aids in ensuring compliance without overwhelming internal teams. For example, some health plans are deploying AI-driven software to automatically review medical records for any discrepancies between documented conditions and submitted diagnosis codes. These tools can flag potential unsupported diagnoses in real time, allowing plans to correct errors before they are picked up in a CMS audit. Enhanced reporting systems also help plans continuously monitor their risk score trends under the new model and identify areas where scores are dropping due to the V28 changes – insight that can inform provider outreach and member care programs.

Mizzeto’s healthcare digital operations suite is designed to streamline back-office processes for payers, which now include the heavy compliance workloads. For instance, Mizzeto provides audit and compliance assistance, conducting transactional audits to ensure policy compliance and quality control. Such services can take on the labor-intensive task of reviewing claims and medical records for accuracy, effectively augmenting a health plan’s internal audit department. Mizzeto also specializes in claims processing automation and data management, which helps plans keep their billing accurate and up-to-date with the latest rules. By automating routine claims checks and integrating the new risk adjustment logic into claims workflows, these technologies reduce the chance of human error that could lead to audit findings.

Another area where external partners prove valuable is in financial reconciliation and provider recovery efforts. If a plan does end up owing money back to CMS or identifies overpayments made to providers, Mizzeto’s services include analyzing overpayment situations and even helping to recoup excess payments from providers in the plan’s network. This kind of support is critical when plans are processing the results of an audit or adjusting payments post-review. It ensures that once a compliance issue is identified, the plan can resolve it swiftly on the financial side – whether that means correcting claims, retrieving funds, or crediting CMS – all with minimal disruption to operations.

Crucially, these solutions are not about replacing human expertise but augmenting it. Health plan executives remain at the helm in setting strategy (such as how to respond to CMS rule changes or when to self-audit), but they are leveraging technology and trusted partners to execute those strategies at scale. The result can be a more resilient organization: one that can handle an uptick in audits and shifting payment formulas without sacrificing focus on member care.

Looking ahead, Medicare Advantage plans will continue to refine their approach as real-world data from 2025 rolls in. Early audit results and the first full year of the new risk score model will provide feedback, showing where coding patterns need improvement or which compliance investments yield the best returns. Health plan CEOs are keenly aware that the stakes are high – both in terms of dollar amounts and public trust. Yet, with thorough preparation, the right expertise, and strategic use of technology, plans can navigate these reforms. The overarching goal is aligning Medicare Advantage’s impressive growth with robust accountability. And while the 2025 CMS audit changes pose undeniable challenges, they also present an opportunity: for health plans to demonstrate their commitment to accuracy and quality, strengthening the partnership between the government and private insurers that millions of seniors rely on every day.

1CMS Announces Significant Changes to RADV Auditing Efforts: Considerations and Next Steps for the Medicare Advantage Industry

2CMS Rolls Out Aggressive Strategy to Enhance and Accelerate Medicare Advantage Audits

3Providers, payers press CMS to get rid of Medicare Advantage risk adjustment changes entirely

4Key Areas of Focus for Risk Adjustment as the Calendar Turns to 2025

Article

AI Data Governance - Mizzeto Collaborates with Fortune 25 Payer

The rapid acceleration of AI in healthcare has created an unprecedented challenge for payers. Many healthcare organizations are uncertain about how to deploy AI technologies effectively, often fearing unintended ripple effects across their ecosystems. Recognizing this, Mizzeto recently collaborated with a Fortune 25 payer to design comprehensive AI data governance frameworks—helping streamline internal systems and guide third-party vendor selection.

This urgency is backed by industry trends. According to a survey by Define Ventures, over 50% of health plan and health system executives identify AI as an immediate priority, and 73% have already established governance committees. 

Define Ventures, Payer and Provider Vision for AI Survey

However, many healthcare organizations struggle to establish clear ownership and accountability for their AI initiatives. With different departments implementing AI solutions independently and without coordination, organizations become fragmented and leave themselves open to data breaches, compliance risks, and massive regulatory fines.

Principles of AI Data Governance  

AI Data Governance in healthcare, at its core, is a structured approach to managing how AI systems interact with sensitive data, ensuring these powerful tools operate within regulatory boundaries while delivering value.  

For payers wrestling with multiple AI implementations across claims processing, member services, and provider data management, proper governance provides the guardrails needed to safely deploy AI. Without it, organizations risk not only regulatory exposure but also the potential for PHI data leakage—leading to hefty fines, reputational damage, and a loss of trust that can take years to rebuild. 

Healthcare AI Governance can be boiled down to three key principles:

  1. Protect People – Ensuring member data privacy, security, and regulatory compliance (HIPAA, GDPR, etc.).
  2. Prioritize Equity – Mitigating algorithmic bias and ensuring AI models serve diverse populations fairly.
  3. Promote Health Value – Aligning AI-driven decisions with better member outcomes and cost efficiencies.

Protect People – Safeguarding Member Data 

For payers, protecting member data isn’t just about ticking compliance boxes—it’s about earning trust, keeping it, and staying ahead of costly breaches. When AI systems handle Protected Health Information (PHI), security needs to be baked into every layer, leaving no room for gaps.

To start, payers can double down on essentials like end-to-end encryption and role-based access controls (RBAC) to keep unauthorized users at bay. But that’s just the foundation. Real-time anomaly detection and automated audit logs are game-changers, flagging suspicious access patterns before they spiral into full-blown breaches. Meanwhile, differential privacy techniques ensure AI models generate valuable insights without ever exposing individual member identities.
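
A toy sketch of the audit-log anomaly check described above. The log format, baselines, and multiplier are invented for illustration; real systems would score far more signals (time of day, record sensitivity, access patterns across members):

```python
# Flag users whose record-access volume in a window far exceeds
# their normal baseline.

from collections import Counter

def flag_unusual_access(access_log, baselines, multiplier=3):
    """access_log: list of (user_id, member_id) accesses in the window.
    baselines: {user_id: typical accesses per window}.
    Flags users exceeding multiplier x their baseline."""
    counts = Counter(user for user, _ in access_log)
    return {u for u, n in counts.items()
            if n > multiplier * baselines.get(u, 0)}

# analyst1 normally touches ~10 records per window but hit 40 this time.
log = [("analyst1", f"m{i}") for i in range(40)] \
    + [("nurse1", "m1"), ("nurse1", "m2")]
baselines = {"analyst1": 10, "nurse1": 5}
print(flag_unusual_access(log, baselines))   # {'analyst1'}
```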

Enter risk tiering—a strategy that categorizes data based on its sensitivity and potential fallout if compromised. This laser-focused approach allows payers to channel their security efforts where they’ll have the biggest impact, tightening defenses where it matters most.
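
A minimal sketch of what risk tiering can look like in practice. The field names and tier assignments below are illustrative, not a recommended taxonomy:

```python
# Classify data fields by sensitivity so the strictest controls
# (encryption, access review, logging) land where a breach hurts most.

TIER_RULES = {
    "high":   {"ssn", "diagnosis_codes", "medical_record"},
    "medium": {"date_of_birth", "address", "member_id"},
}

def tier_for(field_name):
    """Return the sensitivity tier for a field; anything unlisted is low."""
    for tier, fields in TIER_RULES.items():
        if field_name in fields:
            return tier
    return "low"

print(tier_for("diagnosis_codes"))   # high
print(tier_for("plan_name"))         # low
```

The payoff is prioritization: an AI pipeline touching only "low" fields can move through review quickly, while anything touching "high" fields triggers the full control set.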

On top of that, data minimization strategies work to reduce unnecessary PHI usage, and automated consent management tools put members in the driver’s seat, letting them control how their data is used in AI-powered processes. Without these layers of protection, payers risk not only regulatory crackdowns but also a devastating hit to their reputation—and worse, a loss of member trust they may never recover.

Prioritize Equity – Building Fair and Unbiased AI Models 

AI should break down barriers to care, not build new ones. Yet, biased datasets can quietly drive inequities in claims processing, prior authorizations, and risk stratification, leaving certain member groups at a disadvantage. To address this, payers must start with diverse, representative datasets and implement bias detection algorithms that monitor outcomes across all demographics. Synthetic data augmentation can fill demographic gaps, while explainable AI (XAI) tools ensure transparency by showing how decisions are made.
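
One simple form of such a bias monitor is a demographic-parity check on approval rates. The decision data and the gap threshold below are hypothetical; real monitors would also control for clinical acuity and other confounders:

```python
# Compare approval rates across demographic groups and flag large gaps.

def approval_rates(decisions):
    """decisions: list of (group, approved: bool) -> {group: approval rate}."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

decisions = [("A", True)] * 90 + [("A", False)] * 10 \
          + [("B", True)] * 70 + [("B", False)] * 30
rates = approval_rates(decisions)
print(rates)                      # {'A': 0.9, 'B': 0.7}
print(parity_gap(rates) > 0.1)    # True -> investigate
```

A gap over the chosen threshold doesn't prove bias by itself, but it tells the ethics committee exactly where to dig in.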

But technology alone isn’t enough. AI Ethics Committees should oversee model development to ensure fairness is embedded from day one. Adversarial testing—where diverse teams push AI systems to their limits—can uncover hidden biases before they become systemic issues. By prioritizing equity, payers can transform AI from a potential liability into a force for inclusion, ensuring decisions support all members fairly. This approach doesn’t just reduce compliance risks—it strengthens trust, improves engagement, and reaffirms the commitment to accessible care for everyone.

Promote Health Value – Aligning AI with Better Member Outcomes 

AI should go beyond automating workflows—it should reshape healthcare by improving outcomes and optimizing costs. To achieve this, payers must integrate real-time clinical data feeds into AI models, ensuring decisions account for current member needs rather than outdated claims data. Furthermore, predictive analytics can identify at-risk members earlier, paving the way for proactive interventions that enhance health and reduce expenses.

Equally important are closed-loop feedback systems, which validate AI recommendations against real-world results, continuously refining accuracy and effectiveness. At the same time, FHIR-based interoperability enables AI to seamlessly access EHR and provider data, offering a more comprehensive view of member health.
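
To make the FHIR data shapes concrete, here is a minimal sketch with synthetic Patient and Condition resources (all identifiers and values are invented); a real integration would retrieve these over a FHIR REST API rather than build them inline:

```python
# Read FHIR-shaped JSON the way an interoperability layer might:
# link a Condition resource back to the member it describes.

import json

patient_json = json.dumps({
    "resourceType": "Patient",
    "id": "example-member",
    "birthDate": "1948-07-01",
    "name": [{"family": "Doe", "given": ["Jane"]}],
})

condition_json = json.dumps({
    "resourceType": "Condition",
    "subject": {"reference": "Patient/example-member"},
    "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10-cm",
                         "code": "E11.9",
                         "display": "Type 2 diabetes mellitus"}]},
})

patient = json.loads(patient_json)
condition = json.loads(condition_json)

# The subject reference ties the clinical fact to the member record.
assert condition["subject"]["reference"] == f"Patient/{patient['id']}"
print(patient["name"][0]["family"], "->",
      condition["code"]["coding"][0]["code"])   # Doe -> E11.9
```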

To measure the full impact, payers need robust dashboards tracking key metrics such as cost savings, operational efficiency, and member outcomes. When implemented thoughtfully, AI becomes much more than a tool for automation—it transforms into a driver of personalized, smarter, and more transparent care.

Image: Integrated artificial intelligence compliance (FTI Technology)

Importance of an AI Governance Committee

An AI Governance Committee is a necessity for payers focused on deploying AI technologies in their organization. As artificial intelligence becomes embedded in critical functions like claims adjudication, prior authorizations, and member engagement, its influence touches nearly every corner of the organization. Without a central body to oversee these efforts, payers risk a patchwork of disconnected AI initiatives, where decisions made in one department can have unintended ripple effects across others. The stakes are high: fragmented implementation doesn’t just open the door to compliance violations—it undermines member trust, operational efficiency, and the very purpose of deploying AI in healthcare.

To be effective, the committee must bring together expertise from across the organization. Compliance officers ensure alignment with HIPAA and other regulations, while IT and data leaders manage technical integration and security. Clinical and operational stakeholders ensure AI supports better member outcomes, and legal advisors address regulatory risks and vendor agreements. This collective expertise serves as a compass, helping payers harness AI’s transformative potential while protecting their broader healthcare ecosystem.

Mizzeto’s Collaboration with a Fortune 25 Payer

At Mizzeto, we’ve partnered with a Fortune 25 payer to design and implement advanced AI Data Governance frameworks, addressing both internal systems and third-party vendor selection. Throughout this journey, we’ve found that the key to unlocking the full potential of AI lies in three core principles: Protect People, Prioritize Equity, and Promote Health Value. These principles aren’t just aspirational—they’re the bedrock for creating impactful AI solutions while maintaining the trust of your members.

If your organization is looking to harness the power of AI while ensuring safety, compliance, and meaningful results, let’s connect. At Mizzeto, we’re committed to helping payers navigate the complexities of AI with smarter, safer, and more transformative strategies. Reach out today to see how we can support your journey.

February 14, 2025 · 5 min read

Article

What the 2026 CMS Rules Mean for Health Plans

Your UM director just told you the team averaged 8.5 days on standard prior auths last quarter. You nodded, made a note, moved on. In six months, that number becomes a regulatory violation.

For years, health plans have complained about prior authorization burdens: opaque decisioning, variable outcomes, slow turnaround, escalating provider frustration. Half-hearted automation efforts and hybrid analog-digital processes made the problem more visible without solving it.

CMS is now codifying expectations in a way that forces every payer to face reality: the way prior authorization has been done cannot survive 2026.

The changes coming from the CMS Interoperability and Prior Authorization Final Rule aren't incremental technical requirements. They're operational inflection points that will expose long-standing design flaws in prior authorization and utilization management. Leaders who wait until enforcement deadlines will find themselves reacting. Those who act now can redesign the system itself.

The Question Leaders Should Be Asking

Most plans are asking: "What do we have to do to comply with CMS by 2026?" That's a tactical question.

The strategic question is: How do we redesign our prior authorization engine so it performs at the speed, transparency, and explainability levels CMS expects without burning clinical resources, inflating costs, or fragmenting operations?

Checking boxes gets you compliant. Redesigning the system gets you competitive.

What Actually Changes in 2026

Beginning January 1, 2026, CMS moves prior authorization from operational best practice to regulatory mandate.

Under the Interoperability and Prior Authorization Final Rule (CMS-0057-F), impacted payers including Medicare Advantage, Medicaid managed care organizations, CHIP, and certain Qualified Health Plans must comply with several non-negotiable requirements1:

  • 72-hour turnaround for expedited prior authorization requests
  • Seven calendar days for standard requests
  • Specific, actionable denial reasons included with every adverse determination
  • Public reporting of prior authorization metrics, including approval rates, denial rates, and average processing times, beginning March 31, 20262
  • FHIR-based APIs to support electronic prior authorization workflows and expanded data access
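
The two regulatory clocks above can be sketched in a few lines. This is a deliberate simplification: the actual rule has nuances around receipt time and pended requests that a production system must handle:

```python
# Compute the regulatory decision deadline from the request timestamp.

from datetime import datetime, timedelta

def decision_deadline(received_at, expedited):
    """72 hours for expedited requests, seven calendar days for standard."""
    if expedited:
        return received_at + timedelta(hours=72)
    return received_at + timedelta(days=7)

received = datetime(2026, 1, 5, 9, 30)
print(decision_deadline(received, expedited=True))    # 2026-01-08 09:30:00
print(decision_deadline(received, expedited=False))   # 2026-01-12 09:30:00
```

Note that both clocks run on calendar time, not business days, so a request received on a Friday afternoon still has its deadline fall over the weekend.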

 

Here's what this means in practice:

These aren't tweaks to existing workflows. They introduce enforceable timelines, public transparency, and standardized data exchange that most legacy UM environments were never built to support.

What This Exposes Inside Most Plans

The 2026 requirements don't create new operational weaknesses. They expose existing ones.

You Can't Hire Your Way to 72-Hour Compliance

If your prior authorization process depends on manual triage, inconsistent intake validation, or batch review cycles, meeting 72-hour and seven-day mandates becomes structurally challenging. Missed SLAs are no longer internal performance issues. They become regulatory violations.

The constraint is workflow design, not headcount. Adding clinical reviewers may temporarily reduce queue depth, but it doesn't eliminate intake latency, fragmented decision logic, or rework loops that consume days before cases reach clinical evaluation.

Your Denial Rates Are About to Become Public

For the first time, denial rates and processing times will be publicly reported beginning March 31, 2026.2

Plans with high denial rates, particularly those with elevated appeal overturn percentages, will face scrutiny from regulators, providers, and beneficiaries. Appeal overturn rates that were previously internal quality metrics become public signals about determination consistency.

Denials frequently reversed on appeal start looking less like utilization management discipline and more like systematic dysfunction.

Unstructured Intake Creates SLA Risk

Any workflow relying on fax, email attachments, or unstructured documentation creates intake uncertainty. Under 2026 mandates, that uncertainty translates directly into SLA exposure. What was operational inconvenience becomes regulatory vulnerability.

When requests arrive incomplete or in unstructured formats, the clock has already started but clinical review cannot. Days get consumed in follow-up and clarification before actual determination work begins.

Policy Fragmentation Becomes Audit Risk

Medical policies in PDFs. Coverage criteria configured separately in UM systems. Benefit rules embedded in claims engines.

When these layers diverge, denial rationale becomes inconsistent. Inconsistent rationale fuels appeals. Appeal patterns become public metrics tracked by CMS and visible to your provider network.

The 2026 rule requires "a specific reason for a denial"1 in a manner that allows providers to understand what additional information or clinical criteria would result in approval. Fragmented policy governance makes this level of specificity difficult to maintain consistently across thousands of determinations.

API Implementation Without Operational Alignment Fails

FHIR-based Prior Authorization APIs are mandated under the final rule1, but successful implementation requires more than technical connectivity.

These APIs demand structured, standardized data; clear mapping of coverage rules; real-time status tracking; and determination traceability. Treating API implementation as a technical bolt-on without aligning internal policy logic and workflow orchestration creates compliance on paper but operational brittleness in practice.
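For illustration, under the Da Vinci PAS approach a prior authorization request travels as a FHIR Claim resource with `use` set to `preauthorization`. A heavily abbreviated sketch; all identifiers and codes are invented examples, not a complete conformant resource:

```python
# Abbreviated FHIR Claim resource as used for prior authorization under
# the Da Vinci PAS implementation guide. Identifiers are invented.
prior_auth_request = {
    "resourceType": "Claim",
    "status": "active",
    "use": "preauthorization",          # distinguishes PA from a claim
    "patient": {"reference": "Patient/example-member"},
    "created": "2026-01-05T09:00:00Z",
    "provider": {"reference": "Organization/example-practice"},
    "priority": {"coding": [{"code": "stat"}]},  # expedited request
    "item": [{
        "sequence": 1,
        "productOrService": {"coding": [{
            "system": "http://www.ama-assn.org/go/cpt",
            "code": "70553",            # illustrative CPT code
        }]},
    }],
}
```

The point of the structure is traceability: every determination can be tied back to a specific patient, service, and priority, which is exactly what the reporting and denial-specificity requirements demand.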

Reporting Infrastructure Will Strain Multiple Teams

Public reporting requires consolidated, accurate, reconcilable data. The rule requires payers to publicly report metrics including prior authorization decisions, denial reasons, and turnaround times2.

Most plans currently track these metrics across multiple systems: intake portals, UM platforms, claims engines. Without centralized reporting architecture, compliance becomes a manual reconciliation exercise rather than an automated output.
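Once determinations live in one consolidated record set, the reported metrics are trivial arithmetic; the difficulty is the upstream consolidation. A sketch, with invented field names:

```python
# Each record is one determination; field names are illustrative only.
determinations = [
    {"outcome": "approved", "hours_to_decision": 18},
    {"outcome": "denied",   "hours_to_decision": 60},
    {"outcome": "approved", "hours_to_decision": 30},
    {"outcome": "approved", "hours_to_decision": 12},
]

def reporting_metrics(records: list[dict]) -> dict:
    """Compute the publicly reported rates from a consolidated record set."""
    total = len(records)
    denied = sum(r["outcome"] == "denied" for r in records)
    return {
        "approval_rate": (total - denied) / total,
        "denial_rate": denied / total,
        "avg_hours_to_decision": sum(r["hours_to_decision"] for r in records) / total,
    }

print(reporting_metrics(determinations))
# {'approval_rate': 0.75, 'denial_rate': 0.25, 'avg_hours_to_decision': 30.0}
```

If intake portals, UM platforms, and claims engines each hold a slice of these fields, producing that one list of records is the real engineering task.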

What Forward-Thinking Plans Are Doing Differently

The plans that will meet and leverage the 2026 expectations approach the problem differently.

They Treat Prior Authorization as a System, Not a Function

Rather than thinking in terms of "PA teams" or "PA tech stacks," they define a unified decision pipeline: intake → policy → decision → evidence → reporting. Every component must be architected for speed, traceability, and defensibility.

They Engineer Intake for Decision Readiness

Systems that treat intake as a validation and structuring event, not just data capture, dramatically reduce downstream review time. When requests arrive complete and structured, decisions get smarter and faster.

If a significant portion of requests require follow-up for missing clinical documentation, days burn before clinical review even starts. Fixing intake fixes throughput.
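Treating intake as a validation event can be as simple as a completeness gate in front of the clinical queue. A minimal sketch, assuming a hypothetical required-field set; real requirements vary by service type and medical policy:

```python
# Hypothetical required fields for a decision-ready request; actual
# requirements depend on the service type and the plan's medical policy.
REQUIRED_FIELDS = {"member_id", "npi", "cpt_code", "diagnosis_code", "clinical_summary"}

def validate_intake(request: dict) -> list[str]:
    """Return the missing or empty fields, sorted; an empty list means
    the request can enter the clinical queue immediately."""
    return sorted(f for f in REQUIRED_FIELDS if not request.get(f))

incomplete = {"member_id": "M123", "npi": "1234567890", "cpt_code": "70553"}
print(validate_intake(incomplete))  # ['clinical_summary', 'diagnosis_code']
```

Requests that fail the gate go back to the submitter immediately, before the SLA clock has consumed days in follow-up.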

They Govern Policy and Logic Centrally

If policy resides in PDFs, disparate tools, and tribal knowledge, automation fails. Aligning policy logic, configuration, and deployment is the prerequisite for defensible, explainable decisions that meet CMS transparency expectations.

Centralized policy governance ensures reviewers apply consistent standards across all determinations, directly impacting appeal rates and public reporting metrics.

They Accelerate FHIR API Adoption Strategically

Forward-leaning plans are adopting FHIR Prior Authorization APIs now, enabling electronic request and response, reducing provider friction, and establishing a foundation for real-time decisioning rather than batch processing.

This isn't just compliance theater. It's infrastructure for the next decade of utilization management.

The 2026 Budget Reality

Most organizations' instinctive reaction to tighter SLAs is staffing expansion. Consider what that investment looks like:

Clinical hiring: Expanding nurse reviewer teams to handle faster turnaround requirements

Reporting resources: Staff to reconcile metrics across systems for public reporting compliance

API implementation: Technical infrastructure for FHIR prior authorization API deployment and provider integration

Policy governance: Often unfunded, leading to continued fragmentation and appeal exposure

The alternative is investing in redesigning the decision pipeline itself. Structured intake, centralized policy logic, and automated workflow orchestration reduce review burden while improving consistency. The ROI isn't just compliance. It's operational leverage.

What This Means for Your 2026 Planning

If you're treating this as a compliance checklist, you're already behind. This is a fundamental redesign of how utilization management operates.

By Q2 2025: Audit your current PA workflow end-to-end. Identify where time gets consumed: intake validation, clinical review queues, policy lookup, documentation rework, peer-to-peer scheduling. Measure your actual turnaround distribution, not averages.

By Q3 2025: Centralize policy governance. Map coverage criteria to decision logic. Ensure clinical reviewers are applying consistent standards that can withstand public scrutiny and audit review.

By Q4 2025: Implement structured intake that validates completeness before requests enter clinical queues. Stand up reporting infrastructure that consolidates metrics in real time.

By Q1 2026: Conduct dry runs of public reporting. Simulate 72-hour expedited workflows under peak volume. Validate FHIR API functionality with key provider groups.

The plans that redesign now won't just comply. They'll operate with structural advantage.

How Mizzeto Helps Plans Redesign for 2026

We built Smart Auth after years of working inside health plan operations, seeing firsthand where prior authorization workflows break down. It's designed to make prior authorization decision-ready from intake through policy application and final determination.

Smart Auth structures data at intake, aligns policy logic centrally, and supports the traceability required for timely decisions and transparent reporting. It enables defensible, explainable determinations at the speed CMS expects without requiring massive clinical hiring or fragmented point solutions.

In 2026, prior authorization performance won't be judged internally. It will be measured, reported, and compared publicly. The question isn't whether to redesign. It's whether you start now or spend 2026 firefighting compliance gaps while your metrics become part of the public record.

 

Footnotes

  1. CMS Interoperability and Prior Authorization Final Rule (CMS-0057-F). Available at: https://www.cms.gov/newsroom/fact-sheets/cms-interoperability-and-prior-authorization-final-rule-cms-0057-f
  2. CMS Interoperability and Prior Authorization Final Rule Informational Session, January 17, 2024. Available at: https://www.cms.gov/files/document/cms-interoperability-and-prior-authorization-final-rule-informational-session-january-17-2024.pdf

Jan 30, 2024 · 6 min read

February 16, 2026 · 2 min read

Article

Over 80% of Prior Auth Denials Get Overturned. Auto Approvals Should Have Prevented Most of Them.

Physicians and their staff complete an average of 39 prior authorization requests per week. They spend roughly 13 hours processing them.¹ When requests get denied, more than 80% of those denials are partially or fully overturned on appeal, meaning the care was appropriate all along.²

That is not a utilization management program working as intended. That is a system generating unnecessary friction, burning clinical resources, and producing decisions it cannot defend.

Auto approvals were supposed to fix this. Route the obvious cases through automatically. Free up clinical reviewers for complex decisions. Cut turnaround times. Reduce provider abrasion.

Most health plans tried some version of this approach. Few got the results they expected.

The Auto Approval Promise vs. the Auto Approval Reality

The math behind auto approvals is straightforward. If a significant share of prior authorization requests are routine, policy aligned, and destined for approval anyway, why route them through manual clinical review?

The problem is execution. In a recent KFF analysis of Medicare Advantage data, denial rates ranged from 4.2% at Elevance Health to 12.8% at UnitedHealth Group.² Those rates might seem low. But when over 80% of denied requests are overturned on appeal, the real story becomes clear: plans are denying care they will ultimately approve, just with extra steps, extra cost, and extra delay.

According to the CAQH Index, only 35% of medical prior authorizations are conducted fully electronically.³ Manual prior authorization transactions cost providers $10.97 each. Fully electronic ones cost $5.79, roughly half. For payers, the gap is even wider: $3.52 per manual transaction versus five cents for a fully electronic one.⁴
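The per-transaction arithmetic from those CAQH figures makes the stakes concrete (the annual volume below is purely illustrative):

```python
# Per-transaction costs from the CAQH Index figures cited above.
provider_manual, provider_electronic = 10.97, 5.79
payer_manual, payer_electronic = 3.52, 0.05

# Combined provider + payer savings for each transaction converted
# from manual to fully electronic.
per_txn_savings = (provider_manual - provider_electronic) + (payer_manual - payer_electronic)
print(round(per_txn_savings, 2))  # 8.65

# At an illustrative 100,000 manual transactions a year, full
# conversion would save roughly:
print(round(per_txn_savings * 100_000))
```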

Auto approvals should be eating into these costs. For most plans, they are not.

Why Auto Approvals Keep Failing Inside Legacy Workflows

The failure mode looks the same everywhere. A health plan bolts an auto approval layer onto a prior authorization workflow that was designed for manual review. Every request enters the same intake funnel. Data arrives incomplete or inconsistently structured. Clinical documentation lands as bulk attachments, hundreds of pages of chart notes that no automation can parse reliably.

Under those conditions, auto approval rules get conservative fast. Exceptions multiply. Edge cases pile up. The system cannot distinguish between a straightforward imaging request that matches policy criteria and a complex surgical case requiring genuine clinical judgment. So it sends both to manual review, because it cannot trust its own inputs.

The result: auto approvals exist on paper but barely dent the queue. Reviewers still touch most cases. And the 40% of physician practices that now employ staff exclusively to handle prior authorization paperwork¹ see no relief.

Three specific blockers keep this pattern locked in place.

Intake is broken. Requests arrive via fax, portal, phone, and EDI, often missing required fields. When the system cannot confirm a request is complete, it cannot auto approve it. We wrote about this problem in detail in our piece on modernizing UM intake. The front end is where most prior authorization delay actually begins.

Policy logic is fragmented. Medical policies live in PDFs. Clinical criteria are configured in the UM platform. Benefit rules sit in the claims system. No single source of truth exists for “should this request be approved?” When three systems disagree, the default answer is always manual review.

Nobody owns the auto approval rate. UM owns clinical appropriateness. IT owns the platform. Compliance owns regulatory exposure. No single executive is accountable for the percentage of requests that bypass manual review, so nobody optimizes for it.

What Decision Ready Prior Auth Actually Looks Like

The health plans getting auto approvals right are not buying better automation. They are fixing the preconditions that make automation possible.

That means restructuring intake so requests arrive complete and policy aligned before any decision logic runs. It means centralizing medical policy so criteria are applied consistently, not interpreted differently by different reviewers on different shifts. And it means surfacing clinical evidence in context: extracting the three data points that matter for a routine imaging request, rather than dumping a 200 page chart into a reviewer’s queue.

This is the approach behind Mizzeto’s Smart Auth. Instead of asking “can this request be auto approved?” Smart Auth asks “is this request decision ready?” That distinction matters. A decision ready request has complete data, matches a known policy pathway, and meets explicit criteria thresholds, so the system approves it with confidence, not with crossed fingers.
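That three-part test can be sketched directly. The field names, codes, and criteria flag below are illustrative, not Smart Auth's actual logic:

```python
def decision_ready(request: dict, policy_pathways: set[str]) -> bool:
    """A request is decision-ready when its data is complete, it matches
    a known policy pathway, and it meets that pathway's explicit
    criteria. Field names and the criteria check are illustrative."""
    complete = all(request.get(f) for f in ("member_id", "cpt_code", "diagnosis_code"))
    known_pathway = request.get("cpt_code") in policy_pathways
    meets_criteria = request.get("criteria_met") is True
    return complete and known_pathway and meets_criteria

# e.g., routine imaging codes whose criteria have been codified
pathways = {"70553", "72148"}
req = {"member_id": "M1", "cpt_code": "70553",
       "diagnosis_code": "G43.909", "criteria_met": True}
print(decision_ready(req, pathways))  # True
```

Anything that fails any of the three checks routes to manual review; the gate only auto-approves what it can fully verify.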

CMS is pushing the entire industry in this direction. Under the Interoperability and Prior Authorization Final Rule (CMS-0057-F), impacted payers must respond to standard prior authorization requests within seven calendar days and expedited requests within 72 hours, effective January 1, 2026.⁵ Plans must also publicly report prior authorization metrics, including approval rates, denial rates, and average turnaround times, beginning March 31, 2026.⁵

Those timelines are not aspirational. They are regulatory. And plans that still route 65% of prior authorization through manual or partially manual channels³ will not meet them by hiring more reviewers. The only realistic path is systematic auto approval of decision ready requests, which means fixing intake, policy logic, and data quality first. We laid out the full compliance timeline in The Countdown to 72/7.

The HL7 Da Vinci FHIR Implementation Guides (CRD, DTR, and PAS) provide the technical scaffolding for this shift, enabling real-time coverage requirement discovery and electronic prior authorization submission.⁶ Plans that invest in FHIR-based infrastructure now are not just meeting a compliance deadline. They are building the foundation for auto approvals that actually scale.

The Bottom Line

If your auto approval rate is stagnant, the problem is not your approval logic. It is everything upstream: incomplete intake, fragmented policy, and data your system cannot trust.

Start by measuring what percentage of prior authorization requests arrive decision ready: complete, structured, and policy aligned before they hit clinical review. That number is your ceiling for sustainable auto approvals. Everything you do to raise it directly reduces manual review volume, improves turnaround performance, and positions your plan for CMS-0057-F compliance.

At Mizzeto, Smart Auth was designed to close exactly this gap. Not by approving more requests automatically, but by ensuring the right requests never need manual review in the first place.

If auto approvals exist in your organization but still feel fragile, that fragility is the signal. Let’s talk about it.

Sources

¹ American Medical Association, “2024 AMA Prior Authorization Physician Survey,” 2024. ama-assn.org

² KFF, “Medicare Advantage Insurers Made Nearly 53 Million Prior Authorization Determinations in 2024,” January 2025. kff.org

³ CAQH, “Priority Topics: Prior Authorization,” 2024. caqh.org

⁴ 4sight Health, “The Costly Lever of Prior Authorization” (citing 2023 CAQH Index data), February 2024. 4sighthealth.com

⁵ Centers for Medicare & Medicaid Services, “CMS Interoperability and Prior Authorization Final Rule (CMS-0057-F),” January 17, 2024. cms.gov

⁶ CAQH CORE, “Navigating the CMS 0057 Final Rule: A Guide for Implementing Prior Authorization Requirements,” 2024. caqh.org

February 5, 2026 · 2 min read

Article

Prior Authorization Improvement: Why 'Faster' Is the Wrong Goal

For years, prior authorization improvement efforts have centered on one metric: speed. Faster turnaround times. Shorter queues. Quicker determinations. When backlogs grow, the instinctive response is to push harder, add staff, tighten SLAs, accelerate intake, automate submission.

And yet, despite sustained investment, many health plans find themselves in a familiar place. Requests move faster into the system, but decisions do not come out any cleaner. Appeals rise. Clinical teams feel busier, not better supported. Regulatory scrutiny intensifies.  

The problem isn’t that health plans aren’t moving quickly enough. It’s that they’re optimizing for the wrong outcome.

The Question Leaders Should Be Asking

The critical question facing payer executives today is not how to make prior authorization faster. It is how to make authorization outcomes decision-ready.

In theory, prior authorization is a linear process. A request arrives. Medical necessity is assessed. A decision is rendered and communicated. In practice, speed at the front of the process often exposes fragility downstream. Requests arrive sooner, but incomplete. Data flows faster, but inconsistently. Clinical documentation is attached, but not usable.

What feels like progress—shorter intake cycles, higher submission volumes—often masks a deeper inefficiency: decisions still require the same amount of searching, clarifying, and rework. Sometimes more.

When speed becomes the primary goal, organizations optimize how fast work enters the system, not how effectively it can be resolved.

Why Faster Intake Often Slows Decisions

In our experience working with payer organizations, most delays in prior authorization do not occur because reviewers are slow. They occur because reviewers are forced to reconstruct meaning from poorly prepared inputs.

Requests arrive with missing or mis-keyed information. Clinical notes are uploaded as hundreds of unstructured pages. Policy criteria are technically met, but not clearly demonstrated. Nurses and physicians spend their time hunting for evidence rather than applying judgment.  

A routine imaging authorization, for example, may arrive with a 200-page chart attached—office notes, lab results, historical encounters spanning years. The information needed to approve the request may exist somewhere in the record, but reviewers must sift through dozens of irrelevant pages to find it. The delay isn’t clinical complexity. It’s the effort required to locate and validate the right signal inside too much noise. That friction compounds downstream, creating a clinical review bottleneck where highly trained staff spend their time searching for context instead of making decisions.

Accelerating intake without addressing these issues simply increases the volume of work that is not ready to be decided. Each incomplete request introduces pauses, clarifications, and handoffs. What should have been a single pass through the system becomes multiple touches across multiple teams.

From the outside, this looks like insufficient capacity. From the inside, it is capacity being quietly consumed by avoidable friction. Across the U.S. health care system, administrative burden tied to prior authorization contributes to multi-billion dollar annual costs, reflecting how inefficient processes absorb payer and provider resources long before clinical review begins.1

This is where many modernization efforts stall. Automation accelerates submission and routing, but PA automation alone does not change the quality of what enters the system. Providers submit more requests because it is easier to do so. Intake teams process them faster. Clinical reviewers inherit the same defects at higher velocity. Speed amplifies whatever already exists—and when work is not decision-ready, it multiplies rework rather than reducing it.

What High-Performing Plans Optimize Instead

Organizations that consistently control prior authorization performance focus less on turnaround time and more on decision quality at entry.

They ensure requests arrive complete and structured, reducing manual re-keying and downstream correction. Reflecting this shift, a significant proportion of health plans have already implemented electronic prior authorization systems, signalling both the complexity of modern workflows and the growing emphasis on reducing manual friction.2 They normalize data so policy criteria can be evaluated consistently. They surface the specific clinical evidence needed for a decision, rather than forcing reviewers to search entire records. And they treat policy logic as a shared, governed asset—not something interpreted differently by each reviewer.

As a result, their systems move work through once. Appeals decrease because rationales are timely and clear. Clinical teams spend their time applying judgment instead of assembling context. Speed improves, but as a consequence of better design, not as the primary objective.

The shift is subtle but decisive. The goal is no longer faster authorization. It is fewer touches per authorization.

Why This Matters Now

Prior authorization sits at the intersection of cost control, access, and regulatory oversight. As CMS and other regulators increasingly expect decisions to be explainable, not just defensible—as reinforced by the CMS prior authorization rule—the cost of prioritizing speed over clarity rises. Under the CMS Interoperability and Prior Authorization final rule (CMS-0057-F), impacted payers must provide prior authorization decisions within 72 hours for urgent requests and seven calendar days for standard requests, and include specific reasons for denials to improve transparency and explainability of decisions.3 The rule shifts expectations away from throughput alone and toward consistency, traceability, and timely rationale.

Systems that rely on heroics and overtime may hit SLAs in the short term, but they accumulate risk. Systems designed for decision readiness scale more predictably and withstand scrutiny more effectively.

What executives experience as utilization management pressure is rarely a failure of effort. It is a signal that the system has been optimized for motion, not resolution.

At Mizzeto, we work with payer organizations to address this exact gap—connecting intake, clinical review, and policy logic so prior authorization decisions can be made efficiently, consistently, and explainably. This is the design philosophy behind Smart Auth, our prior authorization platform—ensuring requests arrive decision-ready, with structured intake, reduced rework, and clinical evidence surfaced in context rather than buried in charts.

Because in modern utilization management, sustained performance isn’t about pushing teams harder. It’s about removing the friction that never needed to be there in the first place.  

If your team is hitting SLAs but appeals keep climbing, let’s talk.

SOURCES

  1. Prior Authorization Statistics: Market Data Report 2026
  2. https://worldmetrics.org/prior-authorization-statistics/
  3. https://content.govdelivery.com/accounts/USCMSMEDICAID/bulletins/3d5c65a

January 30, 2026 · 2 min read

Article

Why Prior Authorization Backlogs Are Predictable (and Preventable)

Prior authorization backlogs are often described as volume problems. They show up as growing queues on operational dashboards, rising turnaround times, and escalating pressure on clinical teams. The explanation, almost reflexively, is that demand arrived faster than expected - too many requests, too little time.

But for most health plans, that explanation doesn’t hold up under scrutiny. Prior authorization backlogs are rarely caused by volume alone. They are caused by friction inside the authorization process itself. Friction that is well known, consistently repeated, and largely predictable.1

The Question Leaders Should Be Asking

The real question isn’t why prior authorization volume increased. It’s why so many authorization requests cannot move cleanly from intake to decision. In theory, prior auth is straightforward: receive a request, assess medical necessity, render a decision, notify the provider. In practice, the work looks very different.

Requests arrive incomplete. Key fields are missing or entered incorrectly. Clinical documentation is attached as hundreds of unstructured pages. Nurses and physicians spend their time searching for the few sentences that actually matter. Decisions stall not because they are clinically complex, but because the information required to make them is fragmented, inconsistent, or buried.

Backlogs form not at the moment of clinical judgment, but long before that judgment can even begin.

Where Prior Authorization Actually Breaks Down

Most prior authorization backlogs are built upstream, during intake. Provider offices submit requests with missing clinical details, outdated codes, or attachments that don’t align to policy requirements.2 Internal coordinators re-key information from faxes, portals, or PDFs, introducing small errors that force rework later. Many prior authorization delays stem from manual processes and technology gaps, leading to inefficiency and error-prone workflows.3 Each defect is minor on its own, but together they create a steady drag on throughput.

Downstream, clinical reviewers inherit this friction. Nurses sift through large medical records to reconstruct timelines.4 Physicians pause decisions while clarifications are requested. Requests bounce between teams. Appeals increase, not always because the decision was wrong, but because the rationale was delayed or unclear. The backlog grows quietly, one stalled case at a time.

Why This Feels Like “Unexpected Volume”

From a distance, all of this looks like a surge. Executives see more cases aging past SLA. Leaders see staff working harder without visible progress. The conclusion is that volume must be overwhelming capacity. In reality, capacity is being consumed by rework.

Every incomplete intake, every mis-keyed field, every unclear policy reference turns a single request into multiple touches. What should have been a linear process becomes a loop. The backlog isn’t driven by how many requests arrived, it’s driven by how many times each request must be handled before it can be resolved. That multiplier effect is predictable. And yet, it’s rarely modeled.
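Modeling the multiplier takes only a few lines. A sketch with illustrative numbers:

```python
def effective_touches(volume: int, rework_rate: float, touches_per_loop: int = 2) -> float:
    """Total handling events when `rework_rate` of requests loop back.
    A clean request is handled once; each reworked request incurs the
    initial touch plus `touches_per_loop` extra handoffs. Parameters
    are illustrative, not measured values."""
    clean = volume * (1 - rework_rate)
    reworked = volume * rework_rate * (1 + touches_per_loop)
    return clean + reworked

# 10,000 requests with a 40% incomplete-intake rate behave like 18,000
# single-touch requests: the "surge" is rework, not volume.
print(effective_touches(10_000, 0.40))  # 18000.0
```

Cutting the rework rate in half does more for queue depth than a comparable increase in reviewer headcount, because it removes touches instead of absorbing them.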

Why Automation Alone Doesn’t Fix Prior Auth Backlogs

Automation is often applied at the intake layer, with the promise of speed. And it does make submission faster. Providers submit more requests. Intake teams process them more quickly. But if the underlying issues remain - missing information, poor data normalization, unstructured records - automation simply accelerates the arrival of flawed work.

Clinical teams feel this immediately. More cases arrive faster, but with the same defects. Reviewers spend less time waiting and more time searching, clarifying, and escalating.5

This is why many health plans modernize prior auth technology and still experience worsening backlogs. Automation has increased flow, but not decision readiness.

What High-Performing Plans Do Differently

Plans that control prior authorization backlogs focus less on speed and more on decision quality at intake.

They invest in ensuring requests arrive complete, structured, and aligned to policy requirements. They reduce manual keying wherever possible. They use technology to surface the right clinical evidence, rather than flooding reviewers with entire charts. And they treat policy interpretation as something that must scale consistently across reviewers, not as tribal knowledge.

Most importantly, they measure where requests stall and why. Backlogs are treated as signals: indicators of where information breaks down, where policy is unclear, or where rework is being introduced.

As a result, their queues are smaller, not because demand disappeared but because requests move through the system once, instead of three or four times.

The Preventable Nature of Prior Authorization Backlogs

When prior authorization backlogs are framed as staffing or volume problems, they persist. When they are understood as information and workflow problems, they become solvable.

Prior auth backlogs don’t originate in clinical decision-making. They originate in how information enters the system and how much effort it takes to make that information usable.

What executives experience as UM backlogs are almost always prior authorization system outcomes. They reflect whether a health plan has designed prior authorization to support clean, defensible decisions at scale.

At Mizzeto, we work with payer organizations to address this exact gap. Connecting intake, clinical review, and policy logic so prior authorization decisions can be made efficiently, consistently, and explainably. Through Smart Auth, we help plans ensure requests arrive decision-ready: structured intake, reduced manual rework, and clinical evidence surfaced in context rather than buried in charts. Because in modern utilization management, sustained performance isn’t about pushing teams harder. It’s about removing the friction that never needed to be there in the first place.

SOURCES

  1. https://www.ama-assn.org/practice-management/prior-authorization/prior-authorization-delays-care-and-increases-health-care
  2. https://www.aha.org/system/files/media/file/2023/10/aha-urges-cms-to-finalize-the-improving-prior-authorization-processes-proposed-rule-letter-10-27-2023.pdf
  3. https://www.atlantisrcm.com/knowledge/single/prior-authorization-delays-the-new-billing-bottleneck-in-the-u-s
  4. https://www.aha.org/system/files/media/file/2022/10/Addressing-Commercial-Health-Plan-Challenges-to-Ensure-Fair-Coverage-for-Patients-and-Providers.pdf
  5. https://blog.nalashaahealth.com/prior-authorization-automation-for-healthplans

January 26, 2026 · 2 min read