Reference Deep-Dive · 14 min read

Customer Effort Score (CES): When Ease Matters More Than Satisfaction

By Kevin, Founder & CEO

A customer contacts support about a billing error. The agent resolves the issue in 8 minutes. The customer receives a CSAT survey and rates the interaction a 5 out of 5 — fully satisfied. The support dashboard turns green. The agent’s performance metrics improve.

But here is what the CSAT score did not capture: before reaching the agent, the customer spent 12 minutes navigating an automated phone tree, was transferred once, had to re-explain the problem after the transfer, and was asked to verify their identity three times. The agent was great. The experience was exhausting. And the next time this customer has an issue, they will think twice about whether it is worth the effort to get it resolved — or whether it is easier to switch to a competitor whose product creates fewer issues in the first place.

This gap between satisfaction and effort is not a measurement edge case. It is a systematic blind spot in how most companies evaluate customer experience. CSAT measures whether customers are happy with the outcome. CES measures how hard they had to work to get there. And a growing body of research demonstrates that effort — not satisfaction — is the stronger predictor of loyalty behavior in many of the interactions that matter most.

What CES Measures and Why Effort Predicts Loyalty


Customer Effort Score was introduced by researchers Matthew Dixon, Karen Freeman, and Nicholas Toman in a 2010 Harvard Business Review article that challenged a fundamental assumption of customer experience strategy. The prevailing wisdom held that companies should aim to “delight” customers — to exceed expectations and create memorable positive experiences that drive loyalty and advocacy. The researchers’ data told a different story.

Analyzing data from more than 75,000 customer interactions across multiple industries, the research team found that exceeding customer expectations had a negligible impact on loyalty. The customers who reported being “delighted” by an interaction were only marginally more loyal than those who reported being merely “satisfied.” But customers who reported high effort — who found the interaction difficult, confusing, or frustrating to complete — were dramatically less loyal. Specifically, 96% of customers who reported high-effort experiences expressed disloyalty intentions (reduced spending, negative word of mouth, or switching), compared to only 9% of low-effort customers.

The implication was counterintuitive: companies should stop trying to exceed expectations and focus instead on removing friction. Loyalty is not won by creating moments of delight. It is lost by creating moments of effort. The strategic priority should not be “How can we make customers happy?” but “How can we make things easy?”

This does not mean satisfaction is irrelevant. It means that satisfaction operates as a hygiene factor — its absence creates dissatisfaction and disloyalty, but its presence does not proportionally create loyalty and advocacy. Effort, by contrast, operates as a loyalty destroyer — even customers who are satisfied with the outcome of an interaction will disengage if the process of achieving that outcome was painful.

CES Methodology: How to Measure Effort


The Standard CES Question

The most widely used CES question, refined through multiple iterations since the original research, is:

“To what extent do you agree or disagree with the following statement: [Company name] made it easy for me to handle my issue.”

Measured on a 7-point Likert scale:

  1. Strongly Disagree
  2. Disagree
  3. Somewhat Disagree
  4. Neither Agree nor Disagree
  5. Somewhat Agree
  6. Agree
  7. Strongly Agree

Some organizations invert this scale (1 = Strongly Agree) or use a 1-5 scale, but the 7-point agree-disagree format is the most validated and produces the most reliable results. Whichever format you use, document it clearly — CES benchmarking is nearly impossible when companies use different scale directions and lengths.
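Two common ways to summarize responses on this scale are the mean score and the "top-two-box" percentage (the share of 6s and 7s). A minimal sketch, assuming the 7-point agree format above:

```python
def summarize_ces(responses):
    """Summarize 7-point CES responses (1 = Strongly Disagree, 7 = Strongly Agree).

    Returns the mean score and the "top-two-box" share: the fraction of
    respondents answering 6 (Agree) or 7 (Strongly Agree).
    """
    if not responses:
        raise ValueError("no responses to summarize")
    mean = sum(responses) / len(responses)
    top_two_box = sum(1 for r in responses if r >= 6) / len(responses)
    return round(mean, 2), round(top_two_box, 3)

# Example: ten post-support responses
mean, top_two_box = summarize_ces([7, 6, 6, 5, 7, 4, 6, 2, 7, 6])
```

Reporting both matters: a respectable mean can hide a tail of strongly-disagree responses that the top-two-box share exposes.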

Question Design Principles

The CES question should reference the specific interaction, not the overall relationship. “Made it easy to handle my issue” is more actionable than “Makes things easy for me in general.” The specificity anchors the respondent to a concrete experience rather than an abstract impression.

The framing should attribute effort to the company, not the customer. “Made it easy for me” is better than “How much effort did you personally have to put forth?” The company-attribution framing captures the customer’s perception of whether the company created or reduced friction, which is the actionable dimension. The personal-effort framing conflates company-created friction with the inherent difficulty of the task.

Touchpoint Selection

CES is a touchpoint metric, not a relationship metric. It measures the effort associated with a specific interaction, which means you need to select the touchpoints where effort is most relevant and most variable.

High-value CES touchpoints include:

  • Support interactions (phone, chat, email, ticket). This is the most common and most validated CES use case. Post-resolution CES predicts repeat contact, escalation, and churn more reliably than post-resolution CSAT.
  • Onboarding processes. First-experience effort strongly predicts activation and early retention. Customers who find onboarding effortful are significantly less likely to reach value realization, regardless of how “satisfied” they report being.
  • Self-service tasks. Knowledge base, FAQ, account management, billing changes. CES on self-service reveals whether your self-service tools are actually reducing effort or just redirecting it.
  • Checkout and purchase flows. Cart abandonment is often an effort problem, not a pricing problem. CES measured at checkout completion (for those who complete) and at abandonment (for those who do not) reveals the friction points in your conversion path.
  • Account changes. Upgrades, downgrades, cancellations, address changes, payment updates. These are moments of truth where effort directly affects retention decisions.

Low-value CES touchpoints include routine product usage (where satisfaction and feature quality matter more than effort) and marketing interactions (where emotional resonance matters more than ease).

Survey Timing and Distribution

CES surveys should be delivered as close to the interaction as possible — ideally within minutes for digital interactions or within 24 hours for phone or in-person interactions. Unlike CSAT, where a delay produces more reflective responses, CES benefits from immediacy because effort is experiential and fades from memory quickly. A customer who struggled with your phone tree for 15 minutes will recall the frustration clearly if surveyed immediately, but will underreport the effort if surveyed three days later.

The delivery channel should match the interaction channel. In-app interactions should trigger in-app CES surveys. Chat interactions should trigger post-chat surveys. Email support should trigger email surveys. Channel consistency improves response rates and reduces the friction of the survey itself — which is important for a metric that measures friction.

When CES Beats NPS and CSAT


CES is not a replacement for NPS or CSAT — it is a complement that excels in specific contexts where effort is the primary driver of customer behavior.

Support Interactions

This is CES’s strongest use case. Post-support CSAT measures whether the customer is happy with the resolution. Post-support CES measures whether the process of getting the resolution was painful. These are different constructs, and CES is the better predictor of future behavior.

A customer whose issue was resolved satisfactorily but who had to call three times, wait on hold for 40 minutes total, and re-explain the problem each time will score high on CSAT (the issue was fixed) but will report high effort on CES. This customer is at significant risk of reducing engagement, switching, or creating negative word of mouth — risks that CSAT alone would not flag.

The operational implications differ too. High CSAT with high effort means your agents are good but your systems are bad. Low CSAT with low effort means your systems are efficient but your agents are not resolving issues effectively. The combination of CSAT and CES tells you where to invest: agent training, system improvement, or both.

Onboarding

New customers evaluating a product are simultaneously excited (they chose this product for a reason) and vulnerable (they have not yet built habits or switching costs). Onboarding effort determines whether excitement converts to commitment or dissipates into regret.

CSAT during onboarding is misleading because new customers are still in the honeymoon period — they want to like the product and will rate satisfaction higher than their experience warrants. CES is more honest because effort is experiential and harder to rationalize away. A customer who spent three hours trying to configure a basic integration may report being “satisfied” (they eventually succeeded), but their effort score will accurately reflect the friction they experienced.

Post-onboarding CES also predicts feature adoption. Customers who find the initial setup effortful are significantly less likely to explore additional features, even features that would deliver substantial value. The effort of the first experience sets an expectation for all future experiences, creating a ceiling on engagement that satisfaction alone would not reveal.

Checkout and Conversion

Cart abandonment research consistently identifies “process was too complicated” and “checkout took too long” as top-five reasons for abandonment. These are effort problems. CSAT cannot measure them because the customers who abandoned did not complete the process and therefore do not receive the survey. CES, deployed at key friction points in the checkout flow (account creation, address entry, payment selection), can identify where effort accumulates before it reaches the abandonment threshold.

For completed checkouts, post-purchase CES predicts repeat purchase behavior more reliably than post-purchase CSAT. A customer who found the purchase easy will return. A customer who found it effortful may not, even if they are satisfied with the product they received.

CES by Channel: Measuring Effort Where It Happens


Effort is not experienced uniformly across channels. The same task — resolving a billing issue, for example — produces very different effort levels depending on whether the customer uses in-app support, calls a phone line, submits an email ticket, or navigates a self-service portal. Channel-specific CES reveals where your experience architecture creates friction and where it eliminates it.

In-App and Digital Channels

Digital channels should theoretically produce the lowest effort scores because they eliminate human interaction delays and allow customers to self-serve. In practice, poorly designed digital experiences can be higher effort than phone support because the customer has no recourse when they get stuck — there is no human to ask for help.

CES for digital channels should be measured both at the task level (did this specific workflow feel easy?) and at the resolution level (was the overall effort to resolve your issue low?). The task-level measurement identifies specific UX friction points. The resolution-level measurement captures the total effort, including channel-switching if the customer started in self-service and ended up calling support.

Phone and Live Support

Phone support CES is shaped by factors before, during, and after the conversation. Pre-conversation effort includes hold times, IVR navigation, and transfers. During-conversation effort includes identity verification, problem re-explanation, and process navigation. Post-conversation effort includes follow-up actions the customer must take (confirming by email, checking back later, calling again if the fix does not work).

The most diagnostic approach: measure CES for the overall interaction and include a follow-up question identifying the highest-effort component. “What part of the experience required the most effort?” with options for wait time, transfers, verification, explanation, and resolution gives you a heat map of where effort concentrates in your support process.
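Tallying those follow-up answers into a heat map is straightforward; a minimal sketch (the responses below are hypothetical, using the category labels suggested above):

```python
from collections import Counter

# Hypothetical follow-up responses: each customer names the
# highest-effort component of their support interaction.
responses = [
    "wait time", "transfers", "wait time", "verification",
    "wait time", "explanation", "transfers", "wait time",
]

# Heat map of where effort concentrates, most frequent first
heat_map = Counter(responses).most_common()
```

In this example, "wait time" surfaces as the dominant effort source (4 of 8 responses), pointing the redesign work at hold times first.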

Self-Service

Self-service CES is unique because the alternative to high-effort self-service is contacting support — which means self-service failures directly increase support volume and cost. Measuring CES on self-service completions (the customer found what they needed without contacting support) and on self-service failures (the customer attempted self-service and then contacted support) reveals the actual deflection rate of your self-service tools.

The deflection rate, adjusted for effort, is more meaningful than the raw deflection rate. A knowledge base that deflects 60% of potential support contacts but requires high effort for those 60% is not a good self-service experience — it is a different kind of friction that erodes loyalty even as it reduces support costs.
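One way to operationalize an effort-adjusted deflection rate — a sketch, not a standard formula — is to discount raw deflection by the share of deflected sessions that customers actually rated low-effort:

```python
def adjusted_deflection(deflected, total_attempts, low_effort_share):
    """Effort-adjusted deflection rate (illustrative, not a standard formula).

    deflected:        self-service sessions that did NOT lead to a support contact
    total_attempts:   all self-service attempts
    low_effort_share: fraction of deflected sessions rated low-effort
                      (e.g. top-two-box on the 7-point CES scale)
    """
    raw = deflected / total_attempts
    return raw, raw * low_effort_share

# A knowledge base that deflects 60% of contacts, but only half of
# those deflected sessions were rated low-effort:
raw, adjusted = adjusted_deflection(600, 1000, 0.5)
```

The gap between the raw and adjusted figures is the friction your self-service tools are hiding from the support-cost dashboard.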

Email and Asynchronous Channels

Email support CES is complicated by the asynchronous nature of the interaction. Effort accumulates over time: writing the initial email, waiting for a response, reading and interpreting the response, writing a follow-up, waiting again. The total effort can be high even if each individual interaction is smooth.

Measure CES at ticket resolution, not at each response. The customer does not experience each email as a separate interaction — they experience the entire exchange as a single effort to resolve their issue. The resolution-level CES captures the cumulative effort that individual touchpoint measurements would miss.

CES Limitations: What Effort Cannot Tell You


CES is powerful but not comprehensive. Understanding its limitations is essential for using it effectively.

Effort Is Not Emotion

CES measures friction but not feeling. A low-effort experience can still be emotionally negative. A customer who files a warranty claim through a smooth, easy process may still feel disappointed that the product broke. A patient who books an appointment through a seamless online portal may still feel anxious about the medical issue. CES would report these experiences as low-effort successes, missing the emotional dimension that shapes overall loyalty.

This is why CES should complement, not replace, satisfaction measurement. CES tells you whether you are creating friction. CSAT tells you whether the emotional experience is positive. NPS tells you whether the overall relationship generates advocacy. Each metric captures a different dimension of the customer experience.

CES Does Not Capture Delight

The original CES research argued that delight has a negligible impact on loyalty. This claim is more nuanced than often reported. Delight has a negligible impact on loyalty in the context of service recovery and support interactions — the contexts the researchers studied most extensively. In other contexts — product experiences, brand interactions, community engagement — delight can be a meaningful loyalty driver.

A product that surprises and delights users with an unexpectedly elegant feature creates advocacy that CES cannot capture. A brand that builds emotional connection through values alignment creates loyalty that effort reduction alone would never produce. CES is optimized for service and process contexts. In product and brand contexts, satisfaction and emotional metrics are more informative.

CES Is Touchpoint-Specific

CES measures effort at specific touchpoints, which makes it operationally actionable but strategically incomplete. You can aggregate CES across touchpoints to create an “average effort” score, but this average is misleading because effort at different touchpoints has different loyalty implications. High effort during onboarding has a larger retention impact than high effort during a routine account update, but an average CES treats them equally.

The strategic view requires mapping CES to the customer journey and weighting effort scores by the touchpoint’s importance to the overall relationship. This is a more complex analysis than simply reporting a single CES number, but it produces a much more useful picture of where effort reduction will have the greatest impact on loyalty.
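A minimal sketch of that journey weighting (the touchpoint scores and importance weights below are hypothetical):

```python
def journey_weighted_ces(touchpoints):
    """Weighted average of touchpoint CES scores.

    touchpoints: list of (mean_ces, weight) pairs, where weight reflects
    the touchpoint's importance to the overall relationship.
    """
    total_weight = sum(w for _, w in touchpoints)
    return sum(score * w for score, w in touchpoints) / total_weight

# Hypothetical journey: onboarding matters more than a routine update
journey = [
    (4.2, 3.0),  # onboarding: high importance, mediocre ease
    (6.1, 1.0),  # routine account update: low importance, easy
    (5.0, 2.0),  # support resolution: medium importance
]
weighted = journey_weighted_ces(journey)  # ~4.78, vs a naive mean of 5.1
```

Here the unweighted average looks healthier than the weighted one because the easiest touchpoint happens to be the least important — exactly the distortion the journey view corrects.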

Combining CES + NPS + CSAT: The Complete Measurement System


The most effective customer experience measurement programs do not choose between CES, NPS, and CSAT — they deploy each metric where it is strongest and combine the results into a multi-dimensional view of customer health.

The Three-Metric Framework

NPS measures relationship health and referral likelihood. Deploy it periodically (quarterly) across your full customer base. Use it for strategic benchmarking, segment analysis, and as a leading indicator of retention and expansion.

CSAT measures satisfaction with specific experiences or the overall relationship. Deploy it after important interactions and periodically for relationship assessment. Use it for dimensional analysis (which aspects of the experience drive or detract from satisfaction) and for tracking the impact of experience improvements.

CES measures effort at critical touchpoints. Deploy it after support interactions, onboarding steps, self-service tasks, and any interaction where friction is likely. Use it for operational optimization and for identifying the specific process failures that erode loyalty.

Cross-Metric Analysis

The most valuable insights come from analyzing the intersections between metrics. (Recall that on the 7-point agree scale, high effort corresponds to a low CES score.)

High NPS + high effort (customers love you but find interactions effortful): Your product and brand are strong but your service and processes are creating unnecessary friction. Invest in operational improvement — you have goodwill to draw on, but you are depleting it with every high-effort interaction.

Low NPS + low effort (customers are not enthusiastic but interactions are easy): Your operations are smooth but your product or brand is not inspiring loyalty. Effort reduction has reached diminishing returns. Invest in product differentiation and brand building.

High CSAT + high effort (customers are satisfied but it was hard): This is the pattern from the opening example. Customers appreciate the outcome but resent the process. The satisfaction is real but fragile — one more high-effort interaction may tip them into dissatisfaction. Prioritize process improvement before the goodwill erodes.

Low CSAT + low effort (customers are dissatisfied even though it was easy): The problem is not effort — it is outcome quality. Your processes are efficient but your product or service is not meeting expectations. Process optimization will not help here. Product and service quality improvements are needed.
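These quadrants can be expressed as a small classifier. This is a sketch with illustrative thresholds (not standards), assuming the 7-point agree scale where a high CES score means low effort; the fourth combination (low loyalty, high effort) is implied rather than listed above:

```python
def ces_quadrant(relationship_score, ces_score, loyal_at=8, easy_at=6):
    """Classify a customer into loyalty/effort quadrants.

    relationship_score: NPS-style 0-10 rating (or a CSAT proxy)
    ces_score:          7-point agree-scale CES (high score = LOW effort)
    loyal_at, easy_at:  illustrative thresholds, not standards
    """
    loyal = relationship_score >= loyal_at
    easy = ces_score >= easy_at
    if loyal and easy:
        return "healthy: maintain"
    if loyal and not easy:
        return "strong brand, high friction: fix operations"
    if not loyal and easy:
        return "smooth but uninspiring: invest in product and brand"
    return "at risk on both fronts: triage effort and outcome quality"
```

For example, a promoter (NPS 9) who reports a CES of 3 lands in the "strong brand, high friction" bucket, flagging an operations problem that a single-metric dashboard would miss.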

Interview-Based Follow-Up Across All Three Metrics

Each metric identifies a signal. None of them explains the cause. A customer who reports high effort after a support interaction has told you that something was hard — but you do not know whether it was the hold time, the transfer, the identity verification, the agent’s communication, or the resolution itself. The open-ended follow-up field (“What would have made this easier?”) captures some explanation, but written responses tend to be brief and surface-level.

Structured follow-up interviews with customers across score bands provide the depth that surveys cannot. When you interview 50 customers who reported high effort in support interactions, you might discover that the effort is not distributed evenly across the support process — that it concentrates at, say, two specific points (IVR navigation and post-resolution confirmation steps) accounting for 70% of the perceived effort. That specificity transforms a vague “improve the support experience” initiative into a targeted “redesign the IVR and eliminate unnecessary confirmation steps” project with clear scope and measurable outcomes.

Similarly, interviewing customers who are NPS passives but report low effort reveals a different pattern: these customers find your product easy to use but not valuable enough to recommend. The effort is low, the satisfaction is adequate, but the product has not created the kind of impact that generates advocacy. This is a product-market fit insight that no single metric could have surfaced alone — it required the intersection of NPS (low advocacy), CES (low effort), and qualitative follow-up (the product is easy but not important enough).

Making CES Actionable: From Score to Improvement


CES data becomes valuable when it drives specific operational changes. The action framework has four steps.

Step 1: Identify high-effort touchpoints. Rank all measured touchpoints by CES score. The touchpoints with the highest reported effort are your priority targets — but weight them by volume and strategic importance. A high-effort touchpoint that 10% of customers encounter matters less than a moderate-effort touchpoint that 80% encounter.

Step 2: Diagnose effort sources. For each priority touchpoint, determine where effort concentrates. Use your follow-up data (open-ended responses and interview findings) to categorize effort into specific sources: wait time, transfers, repeat contacts, process complexity, information gaps, channel switching, identity verification, and unclear resolution.

Step 3: Redesign for ease. For each identified effort source, design a specific intervention. High wait times may require staffing changes or callback options. Unnecessary transfers may require routing logic improvement. Repeat contacts may require first-contact resolution training. Process complexity may require workflow redesign. Each intervention should have a clear hypothesis about how much effort it will reduce.

Step 4: Measure the impact. After implementing changes, compare CES at the affected touchpoints to the pre-intervention baseline. If the intervention worked, reported effort should drop (on the 7-point agree scale, the CES score should rise) and downstream metrics — repeat contact rate, escalation rate, churn rate — should improve. If CES does not move, the intervention addressed the wrong effort source, and the diagnostic step needs to be repeated.
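The weighting logic in Step 1 can be sketched as follows (the touchpoint names, scores, and encounter shares are hypothetical):

```python
def prioritize(touchpoints):
    """Rank touchpoints for effort reduction (Step 1's weighting logic).

    touchpoints: dict of name -> (mean_ces, share_of_customers)
    On the 7-point agree scale, effort = 7 - mean_ces.
    Priority = effort * share: a moderate-effort touchpoint that most
    customers hit can outrank a high-effort niche one.
    """
    ranked = sorted(
        touchpoints.items(),
        key=lambda kv: (7 - kv[1][0]) * kv[1][1],
        reverse=True,
    )
    return [name for name, _ in ranked]

# Hypothetical: a painful-but-rare touchpoint vs a mildly annoying common one
data = {
    "password reset": (2.5, 0.10),  # high effort, 10% of customers
    "checkout":       (5.0, 0.80),  # moderate effort, 80% of customers
    "plan upgrade":   (6.5, 0.30),  # easy, 30% of customers
}
order = prioritize(data)
```

In this example checkout ranks first ((7 - 5.0) × 0.8 = 1.6) despite password reset being far more painful per encounter ((7 - 2.5) × 0.1 = 0.45) — the volume weighting described in Step 1.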

The companies that extract the most value from CES are the ones that treat it as an operational metric with a continuous improvement loop, not as a strategic metric reported quarterly. CES tells you where the friction is right now, and the friction is always changing as products evolve, customer expectations shift, and processes degrade or improve. The measurement and improvement cycle should run continuously, with each cycle producing specific, measurable effort reductions that compound into a meaningfully easier customer experience over time.

Reducing effort will not make customers love you. But it will remove the reasons they leave. And in most markets, preventing exits matters more than generating enthusiasm — because the customers who stay long enough and find things easy enough will eventually build the habits, dependencies, and organizational commitments that make loyalty self-sustaining.

Frequently Asked Questions

When does CES outperform NPS and CSAT?

CES outperforms NPS and CSAT specifically at high-friction touchpoints — support interactions, onboarding steps, and self-service tasks — where effort is the dominant variable influencing whether a customer continues or churns. At emotionally charged touchpoints like major product failures or executive relationship moments, NPS and CSAT capture dimensions that CES misses entirely.

When and where should CES be measured?

CES measurement should be triggered immediately after the specific interaction it's evaluating — post-chat, post-call, or post-ticket-close for support; post-feature-use for product interactions. A single aggregate CES score obscures which specific channels or task types are creating friction, making it impossible to prioritize where effort reduction investments will have the biggest retention impact.

Why does CES need qualitative follow-up?

CES tells you the effort level customers experienced but not why that effort existed or what specifically made the interaction feel hard. Qualitative interviews following low CES scores surface the specific friction sources — confusing UI, missing documentation, policy constraints — that quantitative scores alone cannot diagnose.

How can teams diagnose the causes behind low CES scores at scale?

User Intuition's AI-moderated interviews can be triggered after low-CES interactions to diagnose the root causes of high effort, converting a score into a diagnostic. At $20 per interview with 48-72 hour turnaround, teams can run follow-up qualitative research at a scale that makes it standard practice rather than an occasional deep dive.

Should CES replace NPS or CSAT?

The strongest measurement systems use all three in context: CES at transactional touchpoints, CSAT for recent interaction quality, and NPS for overall relationship health. Each metric captures a different dimension of the customer experience, and using only one creates blind spots that compound over time into misdiagnosed retention problems.
Get Started

Put This Research Into Action

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call.
