Article

The Challenges of Implementing and Upgrading Core Claims Systems

  • October 2, 2025

Every few years, health plan executives face the same question: stick with the core claims platform they know, or invest in the upgrade that promises better performance, new compliance capabilities, and future-proof scalability.

On paper, upgrading a core claims system seems straightforward. In practice, it is anything but. Behind every upgrade lies a tangle of operational disruption, hidden costs, and strategic decisions about whether incremental improvements are enough — or whether the organization needs a bigger rethink of its core system.

For CEOs, the issue is no longer whether their core systems can process claims reliably — they can. The real question is how to navigate the complexity of implementation and upgrades in a way that preserves agility, controls cost, and positions the plan for a fast-changing regulatory environment.

The Implementation Challenge

Core claims systems are designed as flexible, rules-driven platforms that can accommodate the diverse needs of Medicaid, Medicare Advantage, and commercial lines of business. That flexibility is their strength — and their weakness.

Each new implementation or upgrade requires an enormous degree of configuration, testing, and integration. Payers must align the latest version of their claims system with legacy systems (eligibility, prior authorization, provider directories, member portals), and each integration point introduces risk. A single misalignment in provider contracting rules or claims adjudication logic can cascade into payment errors, member dissatisfaction, or regulatory exposure.

Moreover, because most core systems are deeply customized during initial deployment, upgrades rarely feel like “plug and play.” They often require re-engineering workflows, re-validating interfaces, and retraining staff. What should be a version change can feel like a mini-implementation.

The Upgrade Bottleneck

Most payer CEOs hear the same refrain from their operations and IT leaders: “The upgrade will pay for itself in efficiency.” In theory, yes. New versions introduce better automation, compliance updates, and reporting tools. However, large-scale payer platforms were not designed in an era of real-time interoperability. Many of their core workflows still rely on batch processing, extensive customization, and legacy integration patterns. As a result, upgrades are rarely simple. Migrations can stretch across months, often introducing new bugs or defects that disrupt daily operations. The bottleneck is in execution.1

  • Downtime risk: Even short disruptions in claims processing create reputational and financial exposure. A day of delays can ripple into member grievances and provider abrasion.
  • Testing burden: Because payers often maintain highly customized rule sets, regression testing is complex and resource-intensive. IT teams must simulate thousands of claims scenarios before a go-live.
  • Cost creep: What starts as a “standard upgrade” can balloon into a multi-million-dollar initiative once consulting, testing, downtime mitigation, and staff retraining are factored in.

For CEOs, the bottleneck isn’t simply technical. It’s strategic: How many resources should be spent on making the old platform incrementally better, versus rethinking whether a next-generation solution is needed?

Regulatory Pressures

Upgrading a core claims system cannot be deferred indefinitely. Many core claims, enrollment, and utilization management systems remain siloed, limiting real-time insights and slowing operations. Regulatory requirements—like CMS’s interoperability and prior authorization rules (CMS-0057-F), network adequacy reporting, and transparency mandates—further raise the stakes, demanding standardized APIs, automated reporting, and faster turnaround.2 Without modernization, payers risk inefficiency, compliance gaps, and the inability to respond rapidly to operational pressures.

The compliance bar is also moving faster than typical upgrade cycles. A system refreshed every three to five years may not keep pace with annual regulatory changes. This creates a structural tension: the need for compliance agility versus the slow, heavy cadence of traditional upgrades.

The Organizational Strain

When implementations succeed, they can streamline claims workflows significantly. But these gains are not automatic; upgrades also test organizational resilience. Claims staff must learn new interfaces. Clinical teams relying on UM and PA modules must adapt workflows. The transition phase often requires running parallel systems and performing custom integration work with provider portals, EHRs, and third-party vendors. Finance leaders face budget overruns. And executives must explain to boards why a platform upgrade is consuming so much capital and time.

For many plans, this strain is amplified by workforce realities. IT and operations teams are already lean. Pulling them into months of testing and implementation work diverts attention from member experience, provider relations, and innovation. The opportunity cost is real.

Making the Strategic Choice

Here is where CEOs must step back and ask the bigger question: Is the goal to modernize your core claims system, or to modernize the enterprise?

  • Incremental approach: Continue upgrading, absorb the disruption, and bolt on compliance tools as needed. This preserves continuity but risks technical debt and operational fatigue.
  • Transformational approach: Use the upgrade decision as a pivot point to evaluate alternative platforms, cloud-native solutions, or modular architectures that align with where the industry is headed.

Neither path is inherently right or wrong. What matters is clarity: knowing the true costs of incrementalism versus transformation and aligning the decision with the payer’s broader strategy.

Toward a Smarter Upgrade Strategy

So how should CEOs approach the next round of implementation or upgrade? A few guiding principles stand out:

  1. Treat upgrades as enterprise projects, not IT projects. The impact crosses claims, UM, provider relations, finance, and compliance. Governance must reflect that.
  2. Model total cost of ownership. Factor in not just licensing and consulting, but also downtime, retraining, and opportunity cost.
  3. Benchmark against regulatory timelines. Ask whether the upgrade cycle will keep pace with CMS mandates, or whether external modules will still be needed.
  4. Invest in interoperability first. Whether sticking with your current claims system or moving beyond it, APIs, FHIR compliance, and real-time data exchange should be the non-negotiable foundation.
  5. Build for flexibility. The real risk is not just being behind today but being unable to adapt tomorrow.

The Bottom Line

For payer CEOs, the question is not whether the platform can do the job. It can — and does, for millions of members nationwide. The real issue is whether the cost, complexity, and cadence of implementation and upgrades align with the demands of a regulatory environment that moves faster than traditional IT cycles.

Compliance is non-negotiable. Execution at speed and scale is existential.

At Mizzeto, we help health plans navigate the challenges of implementing and upgrading their core claims systems, turning complex technology transitions into smooth, high-impact changes. Our services streamline operations, modernize claims, and promote connectivity between disparate systems. By breaking down silos, automating data exchange, and delivering real-time operational insights, we help plans turn upgrades into a foundation for resilience, compliance, and measurable ROI.

Upgrading a claims system is more than a technical project—it’s a test of whether a health plan can translate technology into agility, compliance, and measurable impact.

SOURCES

  1. Payer Claims & Administration Platforms 2023 Vendor Performance in a Segmented Market
  2. CMS Interoperability & Prior Authorization Final Rule

Article

AI Data Governance - Mizzeto Collaborates with Fortune 25 Payer


The rapid acceleration of AI in healthcare has created an unprecedented challenge for payers. Many healthcare organizations are uncertain about how to deploy AI technologies effectively, often fearing unintended ripple effects across their ecosystems. Recognizing this, Mizzeto recently collaborated with a Fortune 25 payer to design comprehensive AI data governance frameworks—helping streamline internal systems and guide third-party vendor selection.

This urgency is backed by industry trends. According to a survey by Define Ventures, over 50% of health plan and health system executives identify AI as an immediate priority, and 73% have already established governance committees. 

Define Ventures, Payer and Provider Vision for AI Survey

However, many healthcare organizations struggle to establish clear ownership and accountability for their AI initiatives. With different departments implementing AI solutions independently and without coordination, organizations become fragmented and leave themselves open to data breaches, compliance risks, and massive regulatory fines.

Principles of AI Data Governance  

AI Data Governance in healthcare, at its core, is a structured approach to managing how AI systems interact with sensitive data, ensuring these powerful tools operate within regulatory boundaries while delivering value.  

For payers wrestling with multiple AI implementations across claims processing, member services, and provider data management, proper governance provides the guardrails needed to safely deploy AI. Without it, organizations risk not only regulatory exposure but also the potential for PHI data leakage—leading to hefty fines, reputational damage, and a loss of trust that can take years to rebuild. 

Healthcare AI Governance can be boiled down to three key principles:

  1. Protect People – Ensuring member data privacy, security, and regulatory compliance (HIPAA, GDPR, etc.).
  2. Prioritize Equity – Mitigating algorithmic bias and ensuring AI models serve diverse populations fairly.
  3. Promote Health Value – Aligning AI-driven decisions with better member outcomes and cost efficiencies.

Protect People – Safeguarding Member Data 

For payers, protecting member data isn’t just about ticking compliance boxes—it’s about earning trust, keeping it, and staying ahead of costly breaches. When AI systems handle Protected Health Information (PHI), security needs to be baked into every layer, leaving no room for gaps.

To start, payers can double down on essentials like end-to-end encryption and role-based access controls (RBAC) to keep unauthorized users at bay. But that’s just the foundation. Real-time anomaly detection and automated audit logs are game-changers, flagging suspicious access patterns before they spiral into full-blown breaches. Meanwhile, differential privacy techniques ensure AI models generate valuable insights without ever exposing individual member identities.
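To make the access-control piece concrete, here is a minimal sketch of role-based access checks paired with automated audit logging. The roles, permission names, and logging setup are illustrative assumptions, not a prescribed design.

```python
import logging
from datetime import datetime, timezone

# Hypothetical role-to-permission map; a real deployment would load this
# from a centrally governed policy store.
ROLE_PERMISSIONS = {
    "claims_examiner": {"claims:read"},
    "care_manager": {"claims:read", "member_phi:read"},
    "data_scientist": {"claims:read_deidentified"},
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_access_audit")

def access_resource(user_id: str, role: str, permission: str) -> bool:
    """Grant or deny access and write an audit record either way."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "ts=%s user=%s role=%s permission=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user_id, role, permission, allowed,
    )
    return allowed

# Example: a data scientist requesting identified PHI is denied, and the
# attempt is logged so anomaly detection can pick up unusual patterns.
print(access_resource("u123", "data_scientist", "member_phi:read"))  # False
```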

Enter risk tiering—a strategy that categorizes data based on its sensitivity and potential fallout if compromised. This laser-focused approach allows payers to channel their security efforts where they’ll have the biggest impact, tightening defenses where it matters most.

On top of that, data minimization strategies work to reduce unnecessary PHI usage, and automated consent management tools put members in the driver’s seat, letting them control how their data is used in AI-powered processes. Without these layers of protection, payers risk not only regulatory crackdowns but also a devastating hit to their reputation—and worse, a loss of member trust they may never recover.
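As a rough illustration of the risk-tiering idea described above, a plan might maintain a simple classification map that assigns each data element a sensitivity tier, which in turn drives the controls applied to it. The tiers and field names below are assumptions for the sketch, not a recommended taxonomy.

```python
# Illustrative sensitivity tiers; actual tiers would come from the plan's
# data classification policy, not a hard-coded map.
TIER_RULES = {
    "tier_1_restricted": {"ssn", "member_id", "diagnosis_code", "clinical_note"},
    "tier_2_confidential": {"date_of_birth", "address", "claim_amount"},
    "tier_3_internal": {"plan_code", "region"},
}

def classify_field(field_name: str) -> str:
    """Return the sensitivity tier assigned to a data element."""
    for tier, fields in TIER_RULES.items():
        if field_name in fields:
            return tier
    return "tier_1_restricted"  # treat unmapped fields as restricted until classified

# Tier 1 fields might require encryption at rest, masking in lower
# environments, and per-access auditing; tier 3 only standard controls.
print(classify_field("diagnosis_code"))  # tier_1_restricted
```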

Prioritize Equity – Building Fair and Unbiased AI Models 

AI should break down barriers to care, not build new ones. Yet, biased datasets can quietly drive inequities in claims processing, prior authorizations, and risk stratification, leaving certain member groups at a disadvantage. To address this, payers must start with diverse, representative datasets and implement bias detection algorithms that monitor outcomes across all demographics. Synthetic data augmentation can fill demographic gaps, while explainable AI (XAI) tools ensure transparency by showing how decisions are made.
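One hedged example of what bias monitoring can look like in practice: comparing approval rates across demographic groups and flagging the gap between the highest and lowest. The group labels and sample data below are purely illustrative.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (demographic_group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += int(was_approved)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Spread between the highest and lowest group approval rates."""
    return max(rates.values()) - min(rates.values())

# Purely illustrative prior-authorization outcomes; a real monitor would pull
# these from the UM platform and run on every model or policy release.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rates_by_group(sample)
print(rates, "gap:", round(parity_gap(rates), 2))
```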

But technology alone isn’t enough. AI Ethics Committees should oversee model development to ensure fairness is embedded from day one. Adversarial testing—where diverse teams push AI systems to their limits—can uncover hidden biases before they become systemic issues. By prioritizing equity, payers can transform AI from a potential liability into a force for inclusion, ensuring decisions support all members fairly. This approach doesn’t just reduce compliance risks—it strengthens trust, improves engagement, and reaffirms the commitment to accessible care for everyone.

Promote Health Value – Aligning AI with Better Member Outcomes 

AI should go beyond automating workflows—it should reshape healthcare by improving outcomes and optimizing costs. To achieve this, payers must integrate real-time clinical data feeds into AI models, ensuring decisions account for current member needs rather than outdated claims data. Furthermore, predictive analytics can identify at-risk members earlier, paving the way for proactive interventions that enhance health and reduce expenses.

Equally important are closed-loop feedback systems, which validate AI recommendations against real-world results, continuously refining accuracy and effectiveness. At the same time, FHIR-based interoperability enables AI to seamlessly access EHR and provider data, offering a more comprehensive view of member health.
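As a sketch of what FHIR-based access might look like, the snippet below queries a hypothetical FHIR R4 server for a member's recent lab observations; the base URL, token handling, and example LOINC code are assumptions, not a reference to any specific payer's API.

```python
import requests

FHIR_BASE = "https://fhir.example-payer.com/r4"   # hypothetical endpoint
HEADERS = {"Accept": "application/fhir+json", "Authorization": "Bearer <token>"}

def recent_observations(patient_id: str, code: str, count: int = 10) -> list:
    """Fetch a member's most recent Observation resources for one lab code."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": code,
                "_sort": "-date", "_count": count},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Example: recent HbA1c results (LOINC 4548-4) could feed a risk model
# alongside claims history rather than replacing it.
# observations = recent_observations("12345", "http://loinc.org|4548-4")
```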

To measure the full impact, payers need robust dashboards tracking key metrics such as cost savings, operational efficiency, and member outcomes. When implemented thoughtfully, AI becomes much more than a tool for automation—it transforms into a driver of personalized, smarter, and more transparent care.

Image: Integrated artificial intelligence compliance (FTI Technology)

Importance of an AI Governance Committee

An AI Governance Committee is a necessity for payers focused on deploying AI technologies in their organization. As artificial intelligence becomes embedded in critical functions like claims adjudication, prior authorizations, and member engagement, its influence touches nearly every corner of the organization. Without a central body to oversee these efforts, payers risk a patchwork of disconnected AI initiatives, where decisions made in one department can have unintended ripple effects across others. The stakes are high: fragmented implementation doesn’t just open the door to compliance violations—it undermines member trust, operational efficiency, and the very purpose of deploying AI in healthcare.

To be effective, the committee must bring together expertise from across the organization. Compliance officers ensure alignment with HIPAA and other regulations, while IT and data leaders manage technical integration and security. Clinical and operational stakeholders ensure AI supports better member outcomes, and legal advisors address regulatory risks and vendor agreements. This collective expertise serves as a compass, helping payers harness AI’s transformative potential while protecting their broader healthcare ecosystem.

Mizzeto’s Collaboration with a Fortune 25 Payer

At Mizzeto, we’ve partnered with a Fortune 25 payer to design and implement advanced AI Data Governance frameworks, addressing both internal systems and third-party vendor selection. Throughout this journey, we’ve found that the key to unlocking the full potential of AI lies in three core principles: Protect People, Prioritize Equity, and Promote Health Value. These principles aren’t just aspirational—they’re the bedrock for creating impactful AI solutions while maintaining the trust of your members.

If your organization is looking to harness the power of AI while ensuring safety, compliance, and meaningful results, let’s connect. At Mizzeto, we’re committed to helping payers navigate the complexities of AI with smarter, safer, and more transformative strategies. Reach out today to see how we can support your journey.


Article

5 QNXT Implementation Challenges Health Plans Must Solve

Few initiatives test a health plan's operational resilience like a core claims system implementation. According to research from McKinsey and the University of Oxford, 66% of enterprise software projects experience cost overruns, and 17% go so badly they threaten the organization's existence.¹ For health plans implementing QNXT, the stakes include regulatory compliance, provider relationships, and member satisfaction—all at risk if the project goes sideways.

The good news: most implementation failures are preventable. Understanding where projects typically break down allows health plans to plan proactively and avoid the most common pitfalls.

Data Migration and Conversion Complexity

Every QNXT implementation begins with a deceptively simple question: how do we move our data? The answer is never straightforward. Legacy claims systems store member information, provider records, and historical claims in formats that rarely align with QNXT's data model. Mapping decades of accumulated data—complete with inconsistencies, duplicates, and outdated codes—requires meticulous planning.

The risks are significant. Incomplete member histories create gaps in care coordination. Misaligned provider data leads to incorrect reimbursements. Claims history errors trigger audit findings and compliance exposure.

What works: Successful migrations follow a phased approach. Extract and profile legacy data early to understand its quality and structure. Build robust mapping rules with input from both technical staff and business users who understand the data's context. Validate extensively in parallel testing environments before cutover—identifying discrepancies in a test environment costs far less than fixing them in production. Budget adequate time for data cleansing; it almost always takes longer than planned.
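A hedged sketch of the parallel-validation step: reconciling converted records against the legacy extract on a shared key and flagging dropped rows and field-level mismatches. The column names and pandas-based approach are illustrative assumptions, not a prescribed toolchain.

```python
import pandas as pd

def reconcile(legacy: pd.DataFrame, target: pd.DataFrame, key: str, fields: list):
    """Compare converted records against the legacy source on a shared key."""
    merged = legacy.merge(target, on=key, how="outer",
                          suffixes=("_legacy", "_target"), indicator=True)
    missing = merged[merged["_merge"] != "both"]          # dropped or orphaned rows
    mismatches = {
        f: merged[(merged["_merge"] == "both") &
                  (merged[f + "_legacy"] != merged[f + "_target"])]
        for f in fields
    }
    return missing, mismatches

# Illustrative usage against member extracts; in practice this runs on full
# conversion loads in the parallel test environment before cutover.
# missing, mismatches = reconcile(legacy_members, converted_members,
#                                 key="member_id",
#                                 fields=["dob", "plan_code", "effective_date"])
```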

Benefit Configuration Complexity

QNXT's flexibility is both its greatest strength and its most significant implementation hurdle. Configuring benefits correctly requires understanding the interplay between plan-level and product-level settings, accumulator logic, coordination of benefits rules, and state-specific requirements for Medicaid and Medicare Advantage populations.

Configuration errors rarely surface immediately. They emerge weeks or months later as claims adjudicate incorrectly, members receive wrong explanations of benefits, or accumulators fail to track properly toward deductibles and out-of-pocket maximums. By then, the remediation effort compounds exponentially.

What works: Prioritize your highest-volume, highest-risk benefit configurations for early testing. Build comprehensive test case libraries that cover edge cases—not just the happy path. Document configuration decisions as you make them; institutional knowledge disappears quickly when team members move on. Engage business analysts who understand both the regulatory requirements and QNXT's configuration nuances. For Medicaid and Medicare Advantage plans, involve compliance staff early to ensure configurations align with CMS requirements.
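To illustrate what a test case library entry might look like, the sketch below defines benefit scenarios with expected member cost sharing; the products, codes, and amounts are hypothetical, and adjudicate() stands in for whatever harness drives the configured test environment.

```python
from dataclasses import dataclass

@dataclass
class BenefitTestCase:
    """One scenario in a regression library for benefit configuration."""
    name: str
    product: str
    service_code: str
    billed_amount: float
    expected_member_cost: float   # what a correct configuration should produce

# Illustrative cases only; a real library would also cover accumulators, COB,
# and state-specific Medicaid and Medicare Advantage rules, not just the happy path.
CASES = [
    BenefitTestCase("pcp_visit_after_deductible", "HMO-Gold", "99213", 150.00, 25.00),
    BenefitTestCase("er_visit_oop_max_reached", "HMO-Gold", "99285", 2200.00, 0.00),
]

def run_case(case: BenefitTestCase, adjudicate) -> bool:
    """adjudicate() stands in for a call into the configured test environment."""
    member_cost = adjudicate(case.product, case.service_code, case.billed_amount)
    return abs(member_cost - case.expected_member_cost) < 0.01
```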

Auto-Adjudication Rate Optimization

Go-live is just the beginning. Many health plans discover that their auto-adjudication rates plummet after implementing QNXT. The industry standard benchmark for auto-adjudication hovers around 80%, with best practice targets above 85%.² Yet many organizations fall short, with first-pass rates ranging from 10% to 70%.³

The financial impact is substantial. An auto-adjudicated claim costs health insurers only pennies to process, while one requiring human intervention costs approximately $20. Every claim that falls out of auto-adjudication strains examiner capacity and extends turnaround times.

Low auto-adjudication rates typically stem from a few root causes: overly conservative editing rules, incomplete provider data, poorly configured fee schedules, or business rules that don't account for real-world claim variations. The system works as configured—the configuration simply doesn't reflect operational reality.

What works: Analyze pend patterns weekly in the months following go-live. Identify which edits generate the most fallout and assess whether they're truly necessary or just overly cautious defaults. Tune provider matching logic to reduce false pends from minor data discrepancies. Refine authorization integration so valid authorizations are properly recognized. Establish a continuous improvement cycle rather than treating go-live as the finish line.
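A simple sketch of the kind of pend-pattern analysis described above: ranking edit codes by claim volume and share of total fallout so tuning effort goes where it matters. The field names and weekly extract are assumptions for the sketch.

```python
import pandas as pd

def top_pend_reasons(pended_claims: pd.DataFrame, n: int = 10) -> pd.DataFrame:
    """Rank pend/edit codes by claim volume and share of total fallout."""
    summary = (pended_claims.groupby("edit_code")
               .agg(claims=("claim_id", "count"),
                    dollars=("billed_amount", "sum"))
               .sort_values("claims", ascending=False))
    summary["pct_of_fallout"] = summary["claims"] / summary["claims"].sum() * 100
    return summary.head(n)

# Illustrative weekly run: the handful of edits at the top of this report are
# usually the strongest candidates for tuning or retirement.
# print(top_pend_reasons(pd.read_csv("pended_claims_week.csv")))
```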

Integration with Your Existing Ecosystem

QNXT doesn't operate in isolation. It must connect with EDI gateways for 837, 835, 834, and 270/271 transactions. It needs interfaces to provider portals, member platforms, care management systems, and payment integrity vendors. Each integration point introduces complexity—and potential failure modes.

The challenge intensifies when health plans operate hybrid environments during transition periods. Data must flow correctly between legacy and new systems without duplication, loss, or timing mismatches. Real-time authorization lookups must perform at production scale. Provider directories must stay synchronized across platforms.

Research shows that 51% of companies experience operational disruptions when going live with new enterprise systems, often due to integration failures.

What works: Start integration testing earlier than you think necessary. Build end-to-end test scenarios that simulate production volumes and edge cases. Document every interface specification and establish clear ownership for each connection. Consider middleware layers to buffer complexity, but account for the latency and additional failure points they introduce. Plan for a parallel processing period where both old and new systems run simultaneously, allowing you to validate results before fully cutting over.

Training, Change Management, and Staffing Gaps

Even a perfectly configured QNXT instance fails if your people can't use it effectively. Research indicates that up to 75% of the financial benefits from new enterprise systems are directly linked to effective organizational change management—yet many organizations allocate less than 10% of their total project budget to this critical area.

Implementation partners eventually leave. Institutional knowledge walks out the door. Claims examiners, configuration analysts, and IT staff must internalize new workflows, screens, and processes—often while maintaining production on legacy systems.

The training gap is particularly acute for configuration roles. QNXT benefit configuration requires specialized expertise that takes months to develop. Many health plans underestimate this learning curve and find themselves dependent on external consultants long after go-live.

What works: Build knowledge transfer into implementation contracts from day one. Document configuration decisions and create runbooks for common scenarios. Identify internal staff for intensive mentorship during the project—not just attendance at training sessions, but hands-on involvement in configuration work. Plan for productivity dips in the months following go-live and staff accordingly. Consider whether supplemental staffing can bridge capability gaps during the transition period rather than burning out your core team.

The Five Core QNXT Implementation Challenges

For quick reference, successful QNXT implementations address these critical areas:

  1. Data migration and validation — ensuring complete, accurate conversion from legacy systems through phased extraction, robust mapping, and extensive parallel testing
  2. Benefit configuration — methodical setup with comprehensive testing across all lines of business, with early compliance involvement for government programs
  3. Auto-adjudication optimization — continuous tuning post-go-live to maximize straight-through processing and reduce costly manual intervention
  4. System integration — reliable connections to EDI, portals, and downstream vendors, tested at production scale before cutover
  5. Training and change management — building internal expertise through hands-on involvement, not just classroom training, with realistic productivity expectations

Moving Forward

QNXT implementations are complex, but complexity doesn't have to mean chaos. Health plans that approach these projects with realistic timelines, thorough testing protocols, and genuine investment in their people consistently outperform those who underestimate the effort involved.

The patterns of failure are well-documented. So are the patterns of success. The difference usually comes down to preparation, honest assessment of internal capabilities, and willingness to invest in the areas—like change management and post-go-live optimization—that don't appear on the software license invoice but determine whether the project delivers value.

About Mizzeto

At Mizzeto, we help health plans navigate high-stakes platform transitions with the same rigor they apply to clinical and regulatory decisions. Our teams support QNXT implementations and optimization across Medicare, Medicaid, Exchange, and specialty lines of business—bridging strategy, configuration, and operational execution. The goal isn’t just a successful go-live, but durable performance: higher auto-adjudication, cleaner integrations, and internal teams equipped to govern the system long after consultants exit.

If your organization is preparing for a QNXT implementation—or working to stabilize and optimize one already in production—we’re always open to a thoughtful conversation.

Sources

  1. McKinsey & Company and BT Centre for Major Program Management at the University of Oxford. "Delivering Large-Scale IT Projects On Time, On Budget, and On Value." https://www.forecast.app/blog/66-of-enterprise-software-projects-have-cost-overruns
  2. Healthcare Finance News. "Claims processing is in dire need of improvement, but new approaches are helping." https://www.healthcarefinancenews.com/news/claims-processing-dire-need-improvement-new-approaches-are-helping
  3. HealthCare Information Management. "Understanding Auto Adjudication." https://hcim.com/understanding-auto-adjudication/
  4. Healthcare Finance News. "Claims processing is in dire need of improvement, but new approaches are helping." https://www.healthcarefinancenews.com/news/claims-processing-dire-need-improvement-new-approaches-are-helping
  5. RubinBrown ERP Advisory Services. "Top ERP Insights & Statistics." https://kpcteam.com/kpposts/top-erp-statistics-trends
  6. Sci-Tech-Today. "Enterprise Resource Planning (ERP) Software Statistics." https://www.sci-tech-today.com/stats/enterprise-resource-planning-erp-software-statistics/


Article

CMS Isn't Auditing Decisions — It’s Auditing Proof

Why utilization management may determine who clears the coming audit wave—and who doesn’t.

CMS doesn’t usually announce a philosophical shift. It signals it. And over the past year, the signals have grown louder: tougher scrutiny of utilization management, more rigorous document reviews, and an expectation that payers show—not simply assert—how they operate. The 2026 audit cycle will be the first real test of this new posture.

For health plans, the question is no longer whether they can survive an audit. It’s whether their operations can withstand a level of transparency CMS is poised to demand.

What CMS Is Really Asking for in 2026

Behind every audit protocol lies a single question: Does this plan operate in a way that reliably protects members? Historically, payers could answer that question through narrative explanation—clinical notes, supplemental files, post-hoc clarifications. Those days are ending. CMS wants documentation that stands on its own, without interpretation. Decisions must speak for themselves.

That shift lands hardest in utilization management. A UM case is a dense intersection of clinical judgment, policy interpretation, and regulatory timing. A single inconsistency—a rationale that doesn’t match criteria, a letter that doesn’t reflect the case file, a clock mismanaged by a manual workflow—can overshadow an otherwise correct decision.

The emerging audit philosophy is clear: If the documentation doesn’t prove the decision, CMS assumes the decision cannot be trusted.

Where the System Breaks: UM as the Audit Pressure Point

Auditors are increasingly zeroing in on UM because it sits at the exact point where member impact is felt: the determination of whether care moves forward. And yet the UM environment inside most plans is astonishingly fragile.

Case files exist across platforms. Reviewer notes vary widely in depth and style. Criteria are applied consistently in theory but documented inconsistently in practice. Timeframes live in spreadsheets or side systems. Letter templates multiply to meet state and line-of-business requirements, and each variation introduces new chances for error.

Delegated entities add another degree of variation. AI tools introduce sophistication—but also opacity. And UM letters, already the last mile, are where the most findings concentrate. Audit results from recent years reveal the same weak points over and over: documentation mismatches, missing citations, unclear rationales, inadequate notice language, or timing failures that stem not from malice but from operational drift.

CMS sees all of this as symptomatic of one problem: fragmentation.

Why CMS’s New Expectations Make Sense—Even If They Hurt

To CMS, consistency is fairness. If two reviewers evaluating the same procedure cannot produce the same rationale, use the same criteria, or generate the same clarity in their letters, then members cannot rely on the decisions they receive. From the regulator’s perspective, this isn’t about paperwork—it’s about equity. Documentation is the proof that similar members receive similar decisions under similar circumstances.

Health plans know this in theory. But the internal pressures—volume, staffing variability, outdated systems, multiple point solutions, off-platform decisions, peer-to-peer nuances—make uniformity nearly impossible. CMS’s response is simple: Technical difficulty is not an excuse. Variation is a governance failure.

This is why the agency is preparing to scrutinize AI tools with the same rigor as human reviewers. Automation that produces variable results, or outputs that do not exactly match the case file, is no different from human inconsistency.

CMS is not anti-AI. It is anti-opaque-AI.

What an Audit-Ready UM Operation Actually Looks Like

Plans that will succeed in 2026 are building something different: a coherent operating system that eliminates guesswork. In these models, the case file becomes a single source of truth. Clinical summaries, criteria references, rationales, and letter text are drawn from the same structured data—so the letter is a natural extension of the decision, not a separate narrative created afterward.

Delegated entities operate under unified templates, shared quality rules, and real-time oversight rather than annual check-ins. AI is governed like a medical policy: with defined behavior, monitoring, version control, and auditable outputs. And timeframes are treated with claims-like precision, not as deadlines managed by human vigilance.
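As a rough illustration of treating timeframes with claims-like precision, a governed rules table can compute the decision deadline directly from the receipt timestamp rather than relying on reviewer vigilance. The turnaround values below are illustrative only; actual clocks vary by line of business, request type, and state.

```python
from datetime import datetime, timedelta, timezone

# Illustrative turnaround rules only; real clocks vary by line of business,
# request type, and state, and should come from a governed rules table.
TURNAROUND = {
    "expedited_prior_auth": timedelta(hours=72),
    "standard_prior_auth": timedelta(days=7),
}

def decision_due(received_at: datetime, request_type: str) -> datetime:
    """Compute the decision deadline directly from the receipt timestamp."""
    return received_at + TURNAROUND[request_type]

received = datetime(2025, 11, 3, 14, 30, tzinfo=timezone.utc)
print(decision_due(received, "expedited_prior_auth"))  # 2025-11-06 14:30:00+00:00
```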

This is not just modernization—it is a philosophical shift. A move from “reviewers record what happened” to “the system records what is true.”

Preparing for 2026 Starts in 2025

The path forward isn’t mysterious; it’s disciplined. Plans need to invest the next year in cleaning up documentation, consolidating UM data flows, reducing template drift, tightening delegation oversight, and putting governance around every automated tool in the UM pipeline. The plans that do this will walk into audits with confidence. The plans that don’t will rely on explanations CMS is increasingly unwilling to accept.

The Bottom Line

The 2026 CMS audit cycle isn’t a compliance event—it’s an operational reckoning. CMS is asking payers to demonstrate integrity, not describe it. And utilization management will be the proving ground. The strongest plans are already acting. The others will be forced to.

At Mizzeto, we help health plans build the documentation, automation, and governance foundation needed for a world where every UM decision must be instantly explainable. Because in the next audit cycle, clarity isn’t optional—it’s compliance.


Article

Why UM Letters Still Slow Down Health Plans

In the age of AI-driven utilization management (UM), one paper trail still refuses to move at the speed of automation: the UM letter.

Whether it’s an approval, denial, or request for additional information, these letters remain the last mile of every UM decision, and too often, the slowest. Despite sophisticated review platforms and integrated medical policy engines, many health plans still rely on legacy templates, fragmented data sources, and manual QA loops to generate what regulators consider a fundamental compliance artifact. UM letters are not just a formality; they are a legal requirement. Under CMS rules, plans must issue timely, adequate notice of adverse benefit determinations, explaining both the rationale and appeal rights to members.

The irony is hard to miss: while decisions are made in seconds, the documentation that justifies them can take days.

The Real Question Behind the Delay

The issue isn’t simply that UM letters take time. It’s why they take time, and what that delay reveals about deeper system inefficiencies.

For health plans, the question isn’t “How can we make letters faster?” It’s “Why are they so hard to get right in the first place?”

A single UM letter must synthesize clinical reasoning, regulatory precision, and plain-language clarity, all aligned with CMS, NCQA, and state-specific notice requirements. The challenge is not in the writing, but in orchestrating inputs from multiple systems: clinical review notes, policy citations, benefit text, and provider data.

When those inputs don’t talk to each other, letter generation becomes a bottleneck that slows down turnaround times, increases error risk, and erodes member trust.

Why Templates Must Meet More Than Just Style

UM letter templates are not just administrative artifacts; they are regulatory documents. Under Centers for Medicare & Medicaid Services (CMS) rules, letters providing notice of adverse benefit determinations must meet detailed content and timing standards. For example, the regulation at 42 CFR § 438.404 mandates that notices be in writing and explain the reasons for denial, reference the medical necessity criteria or other processes used, provide the enrollee’s rights to copies of evidence and appeal, and outline procedures for expedited review.1

In practice, this means letter templates must include:

  • A clear description of the decision and the specific denial reason,
  • The criteria or protocol relied upon (with member access to it free of charge),
  • Instructions on how to appeal (standard and expedited),
  • Rights to benefits continuation pending appeal under defined circumstances.2

Failure to incorporate these elements or to issue the notice within required timeframes can expose plans to audit findings, grievances, and regulatory penalties. The tighter the regulatory lens becomes, the less room there is for “good enough” templates. Each health plan must view letter generation not as a clerical task but as a compliance checkpoint. And beyond the regulatory content itself, many programs require that UM notices be written in plain, accessible language at a 6th–8th grade reading level, to ensure members can understand their rights and the basis for a decision.
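One way to treat the template as a compliance checkpoint is to validate it against a checklist of required notice elements before it ever reaches production. The element and field names below are assumptions for the sketch, not a complete reading of the regulation.

```python
# Illustrative checklist of notice elements drawn from the requirements above;
# the element names are assumptions for the sketch, not regulatory text.
REQUIRED_ELEMENTS = {
    "decision_and_denial_reason",
    "criteria_or_protocol_cited",
    "appeal_instructions_standard",
    "appeal_instructions_expedited",
    "benefits_continuation_rights",
}

def missing_elements(template_fields) -> set:
    """Return any required notice element the template does not populate."""
    return REQUIRED_ELEMENTS - set(template_fields)

draft_template = {
    "decision_and_denial_reason",
    "criteria_or_protocol_cited",
    "appeal_instructions_standard",
}
# Flags the expedited-appeal instructions and continuation rights as missing.
print(missing_elements(draft_template))
```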

Five Friction Points Inside UM Letter Workflows

Every health plan faces variations of the same problem, but the underlying breakdowns tend to cluster around five recurring fault lines:

  1. Fragmented Data Sources
    Critical information lives in multiple systems: UM platforms, claims engines, and policy libraries. Each transfer adds latency and the potential for mismatch.
  2. Template Explosion
    Over time, teams accumulate hundreds of letter templates to meet overlapping state and product requirements. Maintaining these manually makes even minor updates a compliance risk.
  3. Human Review Dependency
    Because UM letters must be clinically and legally precise, most organizations rely on multiple layers of human QA. That review process, while necessary, often adds 24–48 hours to turnaround.
  4. Regulatory Complexity
    CMS and state requirements around adverse determination language, appeal rights, and timing create constant moving targets. Even small wording deviations can trigger audit findings.3
  5. Technology Gaps
    Many UM systems weren’t designed for dynamic document assembly. Integrating clinical rationale, structured data, and plain-language output requires middleware or manual intervention.

Each of these friction points compounds the next, creating a cycle of rework, delay, and compliance exposure even in otherwise modernized UM environments.

Connecting the Dots: What the Delay Really Costs

The operational burden of slow UM letters goes far beyond staff productivity. It directly affects regulatory performance, provider satisfaction, and member experience.

Delayed or inconsistent notices can:

  • Violate CMS and NCQA timeliness standards, exposing plans to corrective action.4
  • Create confusion for providers awaiting determinations, delaying care coordination.
  • Generate avoidable grievances and appeals, further burdening UM teams.

The cost is not just administrative; it’s reputational. Every late or unclear letter represents a breakdown in transparency at the very point where payers are most visible to members and regulators alike.5

Building a Smarter Letter Ecosystem

Leading plans are tackling the problem not with more templates, but with smarter orchestration.

The most effective UM letter modernization strategies share three principles:

  • Structured Input, Dynamic Output: Capture decision data in structured fields early in the UM process so letters can be assembled automatically with consistent language and logic.
  • Governance-Driven Templates: Centralize letter libraries under compliance governance, ensuring real-time updates to regulatory text and benefit language.
  • Human-in-the-Loop Automation: Use AI-assisted generation to draft letters but retain clinical reviewer oversight for rationale and tone.

The goal isn’t to remove people; it’s to remove friction. Automation should serve precision, not replace it.
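A minimal sketch of the structured-input, dynamic-output idea: when the decision is captured as structured fields, the letter can be rendered directly from those fields rather than written as a separate narrative. The template text and field names here are illustrative, not a production library.

```python
from string import Template

# Illustrative template fragment; production letter libraries would be centrally
# governed and versioned, with state- and product-specific variants.
DENIAL_BODY = Template(
    "We reviewed the request for $service and made a decision on $decision_date. "
    "This request is denied because $denial_reason. "
    "We used the following criteria: $criteria_citation. "
    "You have the right to appeal. $appeal_instructions"
)

def assemble_letter(decision: dict) -> str:
    """Render the notice directly from structured decision fields."""
    return DENIAL_BODY.substitute(decision)

# Because the same structured fields feed both the case file and the letter,
# the notice cannot drift from the decision it documents.
print(assemble_letter({
    "service": "outpatient MRI of the lumbar spine",
    "decision_date": "2025-11-18",
    "denial_reason": "the documentation does not show that conservative therapy was tried",
    "criteria_citation": "plan medical policy IM-204, Section 3",
    "appeal_instructions": "To appeal, call the number on your member ID card within 60 days.",
}))
```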

When designed correctly, next-generation letter systems can cut turnaround time by 50–70%, reduce rework, and strengthen audit readiness while making communications clearer for both providers and members.

The Bottom Line

UM letters may seem administrative, but they are where compliance, communication, and care converge. If denials are the visible output of your UM program, letters are the proof of its integrity.

For payers, the question isn’t whether letters can be automated; it’s whether they can be governed with the same rigor as the decisions they document.

At Mizzeto, we help health plans modernize UM letter workflows, integrating automation, policy governance, and compliance intelligence into one seamless ecosystem.  

SOURCES

  1. 42 CFR § 438.404 - Timely and Adequate Notice of Adverse Benefit Determination
  2. Medicaid Managed Care State Guide
  3. CMS Coverage Appeals Job Aid
  4. Utilization Management Accreditation - A Quality Improvement Framework
  5. Denials & Appeals in Medicaid Managed Care


Article

Appeals as a Mirror: What Overturned Denials Reveal About Broken UM Processes

In utilization management (UM), few metrics speak louder—or cut deeper—than overturn rates. When a significant share of denied claims are later approved on appeal, it’s rarely just about an individual decision. It’s a reflection of something bigger: inconsistent policy interpretation, reviewer variability, documentation breakdowns, or outdated clinical criteria.

Regulators have taken notice. CMS and NCQA increasingly treat appeal outcomes as a diagnostic lens into whether a payer’s UM program is both fair and clinically grounded.1 High overturn rates now raise questions not just about accuracy, but about governance.

In Medicare Advantage alone, more than 80% of appealed denials were overturned in 2023 — a statistic that underscores how often first-pass decisions fail to hold up under scrutiny.2 The smartest health plans have started to listen. They’re treating appeals not as administrative noise, but as signals.

What Overturned Denials Are Really Saying

Every overturned denial tells a story. It asks, implicitly: Was the original UM decision appropriate, consistent, and well-supported?

Patterns in appeal outcomes can expose weaknesses that internal audits often miss. For example:

  • Repeated overturns for a single service category often signal misaligned or outdated policies.
  • Overturns concentrated among certain reviewers may point to training or workflow inconsistencies.
  • Successful appeals after peer-to-peer discussions often reveal documentation or communication gaps between provider and plan.

These trends mirror national data showing that many initial denials are overturned once additional clinical details are provided, highlighting communication—not medical necessity—as the core failure.3 The takeaway is simple but powerful: Appeal data is feedback—from providers, from regulators, and from your own operations—about how well your UM program is working in the real world.

The Systemic Signals Behind High Overturn Rates

When you look beyond the surface, overturned denials trace back to five systemic fault lines common across payer organizations:

  1. Policy Rigor vs. Flexibility
    Medical necessity criteria must balance evidence-based precision with real-world adaptability. Policies written without clinical nuance—or not updated frequently enough—tend to generate denials that can’t stand up under appeal.
  2. Reviewer Variability
    Even with clear policies, human interpretation introduces inconsistency. Differences in specialty expertise, decision fatigue, or tool usage can lead to unpredictable outcomes.
  3. Provider Documentation Gaps
    Many initial denials are simply the result of incomplete records. When appeals are approved after additional information surfaces, the problem isn’t inappropriate care—it’s communication failure.
  4. Operational Friction
    Lag times between intake, review, and notification can distort first-pass decisions. Data fragmentation between UM, claims, and provider portals compounds the issue.
  5. Weak Feedback Governance
    Too often, appeal outcomes are logged but not analyzed. Mature UM programs close the loop—using overturned denials to retrain reviewers, refine policies, and target provider outreach.

Federal oversight agencies have long flagged this issue: an OIG review found that Medicare Advantage plans overturned roughly three-quarters of their own prior authorization denials, suggesting systemic review flaws and weak first-pass decision integrity.4

Turning Appeals into a Feedback Engine

Leading payers are reframing appeals from a reactive function to a proactive improvement system. They’re building analytics that transform overturn data into actionable intelligence:

  • Policy Calibration: Tracking which criteria most often lead to successful appeals reveals where policies may be too restrictive or outdated.
  • Reviewer Performance: Overlaying overturn trends with reviewer data helps identify where training or peer review support is needed.
  • Provider Partnership: By sharing de-identified appeal insights, plans can help provider groups strengthen documentation and pre-service submissions.
  • Regulatory Readiness: Demonstrating a closed-loop feedback process strengthens NCQA compliance and positions the plan as an adaptive, learning organization.

This approach turns what was once a compliance burden into a continuous-learning advantage.
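As a hedged example of this kind of feedback engine, the sketch below computes overturn rates along any chosen dimension, such as criteria set or reviewer, and surfaces the hotspots with enough volume to be meaningful. The column names are assumptions for the sketch.

```python
import pandas as pd

def overturn_hotspots(appeals: pd.DataFrame, dimension: str, min_volume: int = 20):
    """Overturn rate along one dimension (criteria set, reviewer, service line)."""
    grouped = (appeals.groupby(dimension)
               .agg(appeal_count=("appeal_id", "count"),
                    overturned=("overturned", "sum")))
    grouped["overturn_rate"] = grouped["overturned"] / grouped["appeal_count"]
    return (grouped[grouped["appeal_count"] >= min_volume]
            .sort_values("overturn_rate", ascending=False))

# Illustrative monthly runs feeding policy calibration and reviewer coaching:
# print(overturn_hotspots(appeal_outcomes, dimension="criteria_set"))
# print(overturn_hotspots(appeal_outcomes, dimension="reviewer_id"))
```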

From Reversal to Reform

High overturn rates are not just a symptom—they’re an opportunity. Each reversed denial offers a data point that, aggregated and analyzed, can make UM programs more consistent, more transparent, and more clinically aligned.

The goal isn’t to eliminate appeals. It’s to make sure every appeal teaches the organization something useful—about process integrity, provider behavior, and the evolution of clinical practice.

When health plans start to see appeals as mirrors rather than metrics, UM stops being a gatekeeping exercise and becomes a governance discipline.

The Bottom Line

Overturned denials aren’t administrative noise—they’re operational intelligence. They show where your policies, people, and processes are misaligned, and where trust between payer and provider is breaking down.

For forward-thinking plans, this is the moment to reimagine UM as a learning system.

At Mizzeto, we help health plans turn appeal data into strategic insight—linking overturned-denial analytics to reviewer training, policy governance, and compliance reporting. Because in utilization management, every reversal has a lesson—and the best programs are the ones that listen.

SOURCES

  1. National Committee for Quality Assurance (NCQA). Overview of Proposed Updates to Utilization Management Accreditation 2026
  2. Kaiser Family Foundation (KFF). “Nearly 50 Million Prior Authorization Requests Were Sent to Medicare Advantage Insurers in 2023"
  3. American Medical Association (AMA). “Prior Authorization Denials Up Big in Medicare Advantage"
  4. U.S. Department of Health & Human Services, Office of Inspector General (OIG). Some Medicare Advantage Organization Denials of Prior Authorization Requests Raise Concerns About Beneficiary Access to Medically Necessary Care
