Most health plan operations leaders can tell you their average handle time and their cost per call. Very few can tell you what a single transferred call actually costs when you follow it all the way through the system.
That transferred call triggers a second interaction at $4.90 or more.[1] It resets the resolution clock. It inflates the member’s frustration, which shows up months later in a CAHPS survey the plan cannot retroactively fix. And if the plan is running a Medicare Advantage contract, that CAHPS score is tied directly to Star Ratings, which determine quality bonus payments worth tens of millions in annual revenue.[2]
The real problem is not the cost of one bad call. It is that the way most health plans measure call center quality today was designed for a different era, and it is structurally incapable of seeing how many bad calls are happening, or why.
The FCR Gap Nobody Talks About
First call resolution is the most important metric in any health plan contact center. SQM Group’s benchmarking across more than 100 leading North American healthcare call centers puts the industry average FCR rate at 71%. Only 4% of those centers reach the world-class threshold of 80% or higher.[3]
That means roughly 29% of member calls require a callback, transfer, or follow-up. In some studies, the number is far worse: one analysis found the average healthcare FCR rate sitting at 52%, meaning more than half of all member inquiries go unresolved on first contact.[4]
Each of those unresolved calls carries a compounding cost. SQM Group’s research shows that a 1% improvement in FCR translates to approximately $286,000 in annual operational savings for a typical midsize call center.[5] That is not a theoretical model. That is reduced repeat volume, shorter queues, and lower agent workload.
Now consider the member experience side. Satisfaction drops roughly 15% every time a member has to call back about the same issue.[6] The call that started as a routine benefits question becomes, by the third attempt, a complaint. And complaints have an FCR rate of just 47%.[7]
Transfers, Mis-Routes, and the Cost Multiplier
Healthcare call centers face transfer rates as high as 19%.[8] Each transfer does three expensive things simultaneously.
First, it adds direct cost. A transferred call requires a second agent, a second set of minutes, and often a longer total handle time than a single well-routed interaction. With average handle times running 6.6 minutes and average costs at $4.90 per call, a transferred call effectively doubles the expense of that member interaction.
Second, it destroys member confidence. Talkdesk’s survey of 330 health plan members found that 78% described their experience with their insurer as less than seamless. The leading cause was not claims denials or billing errors. It was poor customer service, cited by 31% of respondents.[9] Being transferred between departments and repeating the same information is the archetype of that frustration.
Third, and most overlooked, transfers create data fragmentation. When a call moves from one agent to another, the wrap-up codes, disposition notes, and resolution status become inconsistent. The first agent may mark the call as resolved because they transferred it. The second agent may not log the original call reason. The result is that the plan’s reporting shows two “handled” calls instead of one unresolved member issue.
Many of these transfers are not agent errors. They are routing failures: an IVR that sends a prior authorization status call to a general benefits queue, or a system that cannot identify a member’s preferred language and routes them to an English-only agent by default. These are infrastructure and configuration problems that compound silently across thousands of calls.
Why Legacy QA Cannot See This Problem
Here is where the structural problem becomes clear.
The traditional approach to call center quality assurance, whether run in-house or through an outsourced partner, reviews between 2% and 5% of total interactions. In many operations, the number sits closer to 2%.[10] That means 95% or more of member calls are never evaluated by anyone.
The math alone makes the approach statistically indefensible. A 3% random sample of 800,000 annual calls captures 24,000 interactions. If 232,000 of those 800,000 calls are repeat contacts, a random sample will contain only about 7,000 of them, and it will almost never surface the systemic patterns that cause them.
The deeper issue is not just sample size. It is what the QA program is designed to measure. Most legacy QA scorecards evaluate whether an agent followed a script, greeted the member properly, and used compliant language. They do not measure whether the member’s issue was actually resolved, whether the call could have been prevented by better routing, or whether the same question has been asked 500 times this month because a benefit change was poorly communicated.
When quality measurement is limited to agent-level compliance on a tiny sample, the operational problems that drive repeat calls, unnecessary transfers, and member dissatisfaction remain invisible. QA scores can look strong while member experience deteriorates, because the scorecard and the member’s reality are measuring different things.
The Star Ratings Revenue Connection
For Medicare Advantage plans, this is not just an operational inconvenience. It is a revenue problem measured in tens of millions.
CAHPS survey results have historically carried a 4x weight in CMS Star Ratings calculations. While the weighting shifted to 2x for Star Year 2026, CAHPS measures remain a significant driver of overall ratings. CMS’s proposed rules for 2027 and beyond signal that member experience will become an even larger share of the total score, with CAHPS and HOS projected to make up nearly 40% of total Star weight by 2029.[11]
The financial stakes are hard to overstate. The gap between a 3.5-star and a 4+ star plan can translate to tens of millions of dollars in annual quality bonus payments. In 2026, only about 40% of MA-PD contracts achieved 4 stars or higher, the lowest proportion in over five years.[12]
Every repeat call, every unnecessary transfer, every escalation that leaves a member frustrated is a data point that can move CAHPS scores. A plan cannot fix a bad call center experience with a follow-up mailer.
What This Looks Like in Practice
Consider a mid-size Medicaid managed care plan handling 800,000 member calls per year. At a 71% FCR rate, roughly 232,000 of those calls require a repeat contact. At $4.90 per call, the repeat volume alone represents more than $1.1 million in direct costs annually, and that does not account for the extended handle times, supervisor escalations, or member complaints those calls generate.
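The cost model behind that example can be sketched in a few lines. The inputs below mirror the article’s illustrative figures; swap in your own plan’s call volume, FCR, and per-call cost to reproduce the math.

```python
# Hedged sketch of the repeat-call cost model in the example above.
# Inputs are the article's illustrative figures; substitute your own.

annual_calls = 800_000
fcr_rate = 0.71            # first call resolution
cost_per_call = 4.90       # average direct cost per call, USD

repeat_calls = annual_calls * (1 - fcr_rate)   # repeat contacts per year
repeat_cost = repeat_calls * cost_per_call     # direct cost of repeat volume

# Sensitivity: direct savings from a 1-point FCR improvement
saved_calls = annual_calls * 0.01
saved_cost = saved_calls * cost_per_call

print(f"Repeat contacts per year: {repeat_calls:,.0f}")
print(f"Direct repeat-call cost:  ${repeat_cost:,.0f}")
print(f"1-pt FCR gain avoids:     {saved_calls:,.0f} calls, ~${saved_cost:,.0f} direct")
```

Note that this counts only direct per-call expense; SQM’s larger $286,000-per-point estimate also reflects shorter queues, lower agent workload, and reduced downstream escalations.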
Now suppose the plan’s QA program reviews 3% of calls. That is 24,000 calls reviewed out of 800,000. The 232,000 repeat interactions? They are almost entirely invisible, because repeat calls do not cluster conveniently in a random 3% sample.
The plan sees a QA dashboard that shows 90%+ compliance scores. The quality team reports stable performance. Meanwhile, CAHPS scores are flat or declining, member complaints are rising, and the CX team cannot pinpoint why.
This is not a failure of the people doing the work. It is a failure of the measurement infrastructure. The plan is making decisions based on what 3% of its interactions reveal, while the other 97% contain the signals that actually explain member experience.
Language Access: The Hidden Multiplier
One of the most overlooked drivers of call center inefficiency in health plans is language access. Medicaid and dual-eligible populations frequently include members whose primary language is not English. When these members reach an agent who cannot serve them in their preferred language, the result is almost always a transfer, extended hold time, or an unresolved interaction.
CMS requires that Medicare Advantage and Medicaid managed care plans provide meaningful language access. But compliance is often measured at the policy level, not the interaction level. A plan may have interpreter services available, but if the routing logic does not match members to bilingual agents and QA does not evaluate non-English interactions, language-related service failures become invisible in aggregate metrics.
This matters because the members most affected are often the most vulnerable: elderly, disabled, low-income, or limited English proficient populations whose CAHPS responses carry the same weight as every other member’s. A plan that underserves this segment is not just creating an equity gap. It is creating a Star Ratings exposure that shows up 12 to 18 months later in the measurement cycle.
What Modern Call Center Operations Should Look Like
The answer is not to bring everything in-house or to stop working with operational partners. The answer is to modernize how quality is measured, who owns the data, and what the plan can actually see. Whether your call center is in-house, outsourced, or hybrid, these capabilities separate plans that manage costs from plans that manage outcomes.
100% interaction monitoring, not sampling. Any quality program that evaluates only a fraction of calls will always miss the patterns that drive repeat contacts and member dissatisfaction. AI-powered monitoring across voice, chat, and digital channels is now operationally viable and should be the baseline expectation.
Multilingual QA that matches the member population. If your plan serves Medicaid or Medicare Advantage populations, quality monitoring must cover non-English interactions with the same rigor as English calls. This means native-language evaluation, not post-hoc translation of transcripts.
Plan-owned quality measurement. Regardless of who operates the call center, the plan should own the quality data. When quality measurement is controlled entirely by the team handling the calls, there is no independent check on whether reported performance matches member reality.
Root-cause analytics, not just scorecards. A QA score tells you whether an agent followed a script. It does not tell you why members are calling back, which call types drive the most transfers, or where routing logic is failing. Modern QA surfaces the operational signals behind the numbers.
Direct linkage to CAHPS and Star Ratings strategy. Call center performance and Star Ratings are not separate workstreams. Quality data from member interactions should feed directly into Stars strategy, giving plans the ability to intervene before CAHPS surveys go into the field.
Operational intelligence, not just compliance reporting. The goal is not a cleaner scorecard. It is the ability to see which processes are broken, which member segments are at risk, and which changes will move the metrics that matter.
How Mizzeto Approaches This
Mizzeto’s Multilingual QA Solution was built to give health plans 100% visibility into call center quality across every language their members speak. Rather than relying on sampling or siloed scorecards, the platform uses AI to monitor and score every member interaction, surfacing the compliance risks, service failures, and repeat-call drivers that legacy QA methods cannot detect. Whether your call center is in-house, outsourced, or a combination, Mizzeto puts quality oversight and operational intelligence back in the hands of the plan.
The Cost of Not Knowing
The most expensive call in your contact center is not the one that takes 12 minutes. It is the one that generates three more calls, a formal complaint, and a CAHPS response that pulls your Star Rating below the bonus threshold.
Health plans have spent years optimizing the visible costs: average handle time, headcount, per-call rates. The invisible costs, the ones hiding in the 95% of calls nobody reviews, are where the real money is. The plans that figure this out first will not just run more efficient call centers. They will have a structural advantage in Star Ratings, member retention, and the ability to make operational decisions based on what is actually happening.
The call center is not a cost center to be minimized. It is an intelligence asset to be owned.
SOURCES
[1] DialogHealth, “Latest Healthcare Call Center Statistics,” 2025.
[2] Ameridial, “Health Plan Member Services Outsourcing for Star Ratings,” 2026.
[3] SQM Group, “Why FCR Matters to Healthcare Insurance Call Centers.”
[4] Physicians Angels, “Healthcare Call Center Statistics To Know,” 2025.
[5] Talkdesk, “How Payers Can Improve Member Experience with Modern Contact Centers.”
[6] TheAIQMS, “AI QMS for BPO: Scaling Contact Center Quality Without Expanding QA Teams,” 2025.
[7] Enthu.ai, “Call Center Quality Assurance,” 2026.
[8] Press Ganey, “CMS Just Ignited the Biggest Stars Shake-Up in a Decade,” December 2025.
[9] Oliver Wyman, “How Plans Can Win as Medicare Advantage Star Ratings Change,” 2025.
[10] CAQH, “2025 CAQH Index: U.S. Healthcare Avoided $258 Billion,” February 2026.
[11] CMS, “2026 Star Ratings Fact Sheet,” November 2025.