Your NPS is 32. Is that good?
If you run a telecom company, that score would place you among the industry’s elite performers. If you sell direct-to-consumer products online, it would put you below the median. And if you lead a SaaS company, it could mean anything from “doing fine” to “actively losing ground to competitors” depending on your market segment, customer mix, and measurement methodology.
This is the fundamental problem with NPS benchmarking: the number itself is almost meaningless without context. Yet teams across every industry fixate on their score, compare it to vague “industry averages” pulled from third-party reports with questionable methodologies, and either celebrate false victories or panic over phantom crises.
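As a refresher on where the number comes from: NPS is the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6); passives (7-8) count in the denominator but not the numerator. A minimal sketch:

```python
def nps(scores):
    """Compute Net Promoter Score from raw 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) dilute
    the result but neither add to nor subtract from it.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 10 responses: 5 promoters, 3 passives, 2 detractors -> 50% - 20% = 30
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 5, 3]))  # 30
```

Note that the same 30 could come from 30% promoters and 0% detractors, or 65% promoters and 35% detractors, which is one more reason the single number hides more than it reveals.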
The reality is more nuanced and more useful than any benchmark table can capture. This guide breaks down what NPS scores actually look like across major industries, explains why those numbers differ, and — most importantly — shows you why understanding the drivers behind your specific score matters far more than knowing where you rank.
The Problem with “Average” NPS Comparisons
Before diving into industry benchmarks, it is worth understanding why most published NPS data should be treated with skepticism.
First, there is a methodology problem. NPS surveys can be delivered via email, in-app, SMS, phone, or embedded in support flows. Each channel produces different response rates and systematically different scores. In-app surveys typically generate scores 5-10 points higher than email surveys because they reach users who are actively engaged. Companies reporting high NPS numbers are sometimes just using a channel that selects for happier respondents.
Second, there is a timing problem. Relationship NPS (sent periodically to all customers) and transactional NPS (triggered after specific interactions) produce very different numbers. A company measuring NPS immediately after a successful onboarding session will report dramatically higher scores than one surveying the full customer base quarterly. Most benchmark reports do not control for this distinction.
Third, there is a sample bias problem. Response rates for NPS surveys typically range from 10% to 30%. The people who respond are not representative of your full customer base. Research consistently shows that satisfied customers and highly dissatisfied customers are overrepresented in NPS responses, while the ambivalent middle — often your largest and most strategically important segment — stays silent. A study by CustomerGauge found that non-respondents churn at nearly twice the rate of respondents, which means your NPS score is telling you about the sentiment of people who are already more engaged than average.
All of this means that the published benchmark you found in a consulting report should be treated as a rough directional indicator, not a precise measurement. With that caveat, here is what the data actually shows.
NPS Benchmarks by Industry: The 2026 Landscape
SaaS and Software: 30-50
The SaaS industry sits in a broad band because the category is enormous and diverse. Infrastructure software companies (AWS, Cloudflare) tend to cluster at the lower end of this range — their products are deeply embedded and users have high switching costs, which means even loyal customers express less enthusiasm. Product-led growth companies (Notion, Figma) tend to score at the higher end because their products are designed to generate delight and their users chose the product voluntarily rather than having it mandated by IT.
The most important SaaS NPS nuance: scores vary dramatically by plan tier. Enterprise customers — who often did not choose the product themselves and interact with it through heavily customized implementations — typically score 10-20 points lower than individual or team-plan users. If you are averaging across tiers, your headline number is masking two very different customer experiences.
The SaaS NPS challenge is that passives (scores of 7-8) represent a uniquely dangerous segment. These customers are satisfied enough to continue auto-renewing but not enthusiastic enough to expand their usage or advocate for the product internally. In a subscription business, passive satisfaction is the precursor to competitive displacement.
E-Commerce and Direct-to-Consumer: 40-60
E-commerce consistently reports the highest NPS scores across industries, and the reason is structural: customers who have a bad experience simply do not come back. Natural attrition removes dissatisfied customers from the surveyed population, inflating scores. The customers who remain — and who respond to surveys — are disproportionately happy.
Top-performing DTC brands (often cited examples include Chewy, Warby Parker, and select fashion brands) report scores in the 60-80 range, but these numbers need context. These brands have cultivated intensely loyal customer bases through exceptional service and community building, but they are also measuring within a self-selected population that excludes everyone who tried the brand once and left.
For e-commerce companies, the more revealing metric is NPS among first-time buyers versus repeat buyers. First-purchase NPS tells you about acquisition quality and initial experience. Repeat-purchase NPS tells you about product quality and relationship depth. The gap between these two numbers is a leading indicator of long-term retention trajectory.
Healthcare: 10-30
Healthcare NPS scores are structurally depressed by factors outside any individual provider’s control. Patients often do not choose their healthcare provider — insurance networks, geography, and referral patterns dictate the relationship. The experience itself is inherently stressful (nobody wants to be a healthcare customer). And the complexity of billing, insurance coordination, and care navigation creates friction that even excellent clinical care cannot fully offset.
Within healthcare, the variance is enormous. Elective and consumer-facing services (dental practices, dermatology clinics, telehealth platforms) regularly score 40-60 because patients chose the provider and the experience is closer to a consumer transaction. Hospital systems and large health networks — dealing with emergency care, chronic disease management, and insurance complexity — typically score 10-25 even with objectively good clinical outcomes.
The healthcare NPS insight that matters most: clinical quality and NPS are only loosely correlated. Studies show that communication quality, wait times, and administrative ease have a larger impact on NPS than clinical outcomes. This does not mean clinical quality is unimportant — it means NPS in healthcare measures the experience of being a patient, not the quality of medical care received.
Financial Services: 20-40
Financial services NPS reflects a fundamental tension: customers depend on these institutions but rarely feel enthusiastic about them. Banks, insurance companies, and wealth management firms manage critical aspects of people’s lives, but the nature of the relationship — built on trust, compliance, and risk management — does not lend itself to the kind of delight that drives high NPS scores.
Digital-first financial services companies (neobanks, fintech platforms) consistently outperform traditional institutions by 15-25 points, but this comparison is misleading. Digital challengers serve younger, more tech-savvy demographics who self-selected into the product. Traditional banks serve everyone, including customers who are there because switching is painful, not because they are enthusiastic.
The most informative financial services NPS segmentation is by customer tenure. New customers (first year) typically score highest — they are in the honeymoon period. Mid-tenure customers (2-5 years) often show a significant dip as initial enthusiasm fades and the relationship becomes transactional. Long-tenure customers (5+ years) either stabilize at a moderate score or bifurcate into loyal advocates and trapped detractors who stay only because switching costs are prohibitive.
Telecom: -5 to 15
Telecom has earned its position at the bottom of NPS benchmarks through decades of structural factors that depress customer satisfaction. Contract lock-ins create resentment. Service quality varies by geography. Pricing is deliberately complex. And the product itself — connectivity — is invisible when it works and infuriating when it does not.
A telecom company with an NPS of 15-20 is genuinely outperforming its peers. T-Mobile’s well-documented NPS improvement from single digits to the 30s over several years was driven by specific, customer-facing policy changes (eliminating contracts, simplifying pricing, improving customer service) that addressed the structural drivers of dissatisfaction. The lesson: even in low-NPS industries, dramatic improvement is possible when you identify and address the specific experience failures that drive detraction.
B2B Professional Services: 30-50
Consulting firms, agencies, and professional services companies occupy a similar band to SaaS, but the drivers are entirely different. B2B services NPS is heavily influenced by the personal relationship between the client and their primary contact. A single strong relationship manager can elevate an otherwise mediocre service experience to promoter territory, while a personality mismatch or communication failure can turn objectively good work into a detractor score.
This makes B2B services NPS uniquely volatile at the individual account level. Companies in this space need to track NPS at the account and relationship-manager level, not just in aggregate, because the aggregate number masks the account-level dynamics that actually drive retention and expansion.
What Drives Benchmark Differences: The Three Structural Factors
Understanding why benchmarks differ across industries is more valuable than knowing the benchmarks themselves, because it helps you interpret your own score in context.
Switching Costs
Industries with high switching costs (telecom, enterprise software, banking) systematically produce lower NPS scores. This seems counterintuitive — shouldn’t locked-in customers be more neutral rather than more negative? The explanation is that high switching costs keep dissatisfied customers in the survey population. In e-commerce, unhappy customers disappear. In telecom, they stay and express their frustration through survey scores. Your NPS reflects the sentiment of people who are still your customers, and in high-switching-cost industries, that includes a lot of people who would leave if they could.
Competition Intensity
Highly competitive markets push companies to deliver better experiences, which raises the baseline for what customers consider “normal.” In e-commerce, where switching to a competitor is one click away, companies have invested heavily in experience quality because they have no other choice. In healthcare, where competition is constrained by networks and geography, the competitive pressure to improve experience is weaker, and customer expectations are correspondingly lower.
Expectation Baselines
Customer expectations are shaped by cumulative industry experience. People expect less from their cable provider because cable providers have historically delivered poor experiences. They expect more from consumer technology because Apple, Google, and Amazon have set a high bar. Your NPS is measured against these ingrained expectations, not against some abstract standard of service quality. A telecom company providing genuinely good service might still score below a mediocre SaaS product because the expectation baseline is different.
Internal Benchmarking: The Comparisons That Actually Matter
The most strategically useful NPS comparisons are not against industry averages but against your own data, segmented in ways that reveal actionable patterns.
Segment-Level Comparison
Compare NPS across customer segments: by plan tier, by use case, by company size, by geography, by acquisition channel. The segments with the highest and lowest scores are not random — they reflect real differences in how well your product or service fits different customer needs. A 20-point gap between your enterprise segment and your SMB segment tells you something specific about your product-market fit at different scales.
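The segment cut described above is a simple grouping computation. A sketch with hypothetical data — the segment labels and scores below are illustrative, not real benchmarks:

```python
from collections import defaultdict

def nps_by_segment(responses):
    """Compute NPS per segment from (segment_label, score) pairs.

    segment_label can be plan tier, geography, use case,
    acquisition channel -- any dimension worth cutting by.
    """
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)
    return {
        seg: round(100 * (sum(s >= 9 for s in scores)
                          - sum(s <= 6 for s in scores)) / len(scores))
        for seg, scores in buckets.items()
    }

# Hypothetical responses: enterprise vs. SMB plan tiers.
responses = [
    ("enterprise", 6), ("enterprise", 7), ("enterprise", 9), ("enterprise", 5),
    ("smb", 9), ("smb", 10), ("smb", 8), ("smb", 9),
]
print(nps_by_segment(responses))  # {'enterprise': -25, 'smb': 75}
```

The same function covers the touchpoint comparison later in this section: pass (touchpoint, score) pairs instead of (tier, score) pairs.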
Cohort Analysis
Track NPS by customer cohort (grouped by when they signed up). If recent cohorts score lower than older cohorts at the same tenure point, something in your product or onboarding has degraded. If recent cohorts score higher, your improvements are working. Cohort analysis controls for the natural evolution of customer sentiment over time and isolates the impact of changes you have made.
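A cohort comparison is only valid if you hold tenure constant. The sketch below (cohort labels and scores are made up for illustration) restricts to responses collected at the same tenure quarter before computing each cohort's NPS:

```python
from collections import defaultdict

def nps_by_cohort_at_tenure(responses, tenure_quarter):
    """NPS per signup cohort, restricted to responses collected at
    the same tenure point, so cohorts are compared like-for-like.

    `responses` is an iterable of (cohort, tenure_q, score) tuples.
    """
    buckets = defaultdict(list)
    for cohort, tq, score in responses:
        if tq == tenure_quarter:
            buckets[cohort].append(score)
    return {
        c: round(100 * (sum(s >= 9 for s in scores)
                        - sum(s <= 6 for s in scores)) / len(scores))
        for c, scores in buckets.items()
    }

# Hypothetical data: two cohorts surveyed at various tenure points.
data = [
    ("2025-Q1", 2, 9), ("2025-Q1", 2, 10), ("2025-Q1", 2, 6), ("2025-Q1", 2, 8),
    ("2025-Q3", 2, 7), ("2025-Q3", 2, 6), ("2025-Q3", 2, 9), ("2025-Q3", 2, 5),
    ("2025-Q1", 1, 10),  # different tenure point, excluded from this cut
]
print(nps_by_cohort_at_tenure(data, tenure_quarter=2))
# {'2025-Q1': 25, '2025-Q3': -25}
```

In this toy example the newer cohort scores lower at the same tenure point, which is exactly the degradation signal described above.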
Time-Period Trends
Quarter-over-quarter NPS trends matter more than the absolute number. A score of 35 that has improved from 28 over four quarters tells a very different story than a 35 that has declined from 42. The trajectory reveals whether your investments are working, while the absolute number tells you very little without competitive context.
Touchpoint Comparison
If you measure transactional NPS across different touchpoints (onboarding, support, billing, renewal), the relative scores reveal where your experience breaks down. A company with high onboarding NPS and low support NPS has a different problem set than one with low onboarding NPS and high support NPS, even if their relationship NPS is identical.
Why Driver Analysis Matters More Than Benchmarks
Here is the uncomfortable truth about NPS benchmarking: knowing your score relative to competitors does not tell you how to improve it. A company that learns it scores 10 points below the industry median knows it has a problem but has no idea what to do about it. The score does not reveal whether the gap is driven by product quality, service experience, pricing perception, brand trust, or some combination of all four.
This is where the gap between measurement and understanding becomes critical. NPS surveys capture sentiment. Follow-up interviews capture causation. The difference between companies that improve their NPS and companies that stagnate is not better benchmarking — it is better driver analysis through qualitative follow-up.
Statistical driver analysis (regression modeling against operational data) can identify correlations: customers who contact support score lower, customers who use Feature X score higher. But correlation is not causation. Customers who contact support might score lower because support is bad, or because they had a problem that soured them on the product regardless of how support handled it. The only way to distinguish between these explanations is to ask customers directly — and to ask them with enough depth to get past surface-level responses.
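The correlational step described above can be sketched as ordinary least squares against operational data. Everything here is synthetic and illustrative — the feature names and weights are invented, and as argued above, recovering the weights tells you nothing about causation:

```python
import numpy as np

# Hypothetical operational features per respondent: support contacts
# and monthly usage of a key feature; `y` is the 0-10 NPS response.
# Synthetic data generated from known weights, so OLS recovers them.
rng = np.random.default_rng(0)
support_contacts = rng.integers(0, 5, size=200)
feature_usage = rng.integers(0, 10, size=200)
y = 7.0 - 0.8 * support_contacts + 0.3 * feature_usage

# Ordinary least squares: stack an intercept column and solve.
X = np.column_stack([np.ones_like(y), support_contacts, feature_usage])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # approximately [7.0, -0.8, 0.3]
```

The regression says support contacts are associated with lower scores; it cannot say whether support itself is the problem or whether needing support is. That distinction is what the follow-up interviews resolve.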
Consider a real scenario: a B2B SaaS company discovers that its NPS among mid-market customers (100-500 employees) is 15 points lower than among small business customers. Statistical analysis shows correlations with implementation complexity, time-to-value, and feature usage breadth. But which of these is causal? Should the company invest in simplifying implementation, accelerating time-to-value, or improving feature adoption?
Follow-up interviews with 150 customers across both segments reveal something the data could not: mid-market customers feel over-served by features they do not need and under-served by the specific workflows they care about. Their lower NPS is not about implementation complexity — it is about product-market fit at their scale. The score gives you the destination (something is wrong with mid-market). The interviews give you the map (the product is too broad and not deep enough for their specific needs).
Building a Benchmarking Practice That Drives Action
If you are going to benchmark NPS — and you should — here is a framework that produces insight rather than anxiety.
Step 1: Establish your internal baseline. Before looking at any external data, measure your NPS consistently for at least four quarters. Use the same methodology, survey population, and timing each quarter. This gives you a reliable baseline against which to measure improvement.
Step 2: Segment ruthlessly. Break your NPS down by every meaningful dimension: customer segment, plan tier, tenure, geography, use case, acquisition channel. The aggregate number is the least useful cut of your data. Identify which segments are strongest and which are weakest.
Step 3: Identify your competitive set. Industry averages are too broad. You do not compete with “SaaS” — you compete with 3-5 specific companies for a specific customer type. Benchmark against them if possible (through win-loss analysis, competitive research, or public data) rather than against the entire industry.
Step 4: Diagnose drivers, not just scores. For every segment where your NPS is notably high or low, conduct qualitative follow-up to understand why. AI-moderated follow-up interviews can reach 200+ customers within 48 hours, giving you the qualitative depth to understand what drives each score without waiting weeks for traditional research.
Step 5: Track improvement against yourself. Set improvement targets based on your own trajectory, not on reaching an industry benchmark. A realistic goal is 3-5 points of NPS improvement per quarter in specific segments where you have identified and addressed the root causes. Compounding quarterly improvements will eventually move your aggregate number, but the aggregate is the output, not the target.
The Benchmark Trap
The biggest risk of NPS benchmarking is not getting the wrong number — it is letting the number substitute for understanding. Teams that fixate on their score relative to competitors often fall into two traps.
The complacency trap: “Our NPS is above the industry median, so we are doing fine.” This ignores the possibility that your score is high because of structural advantages (switching costs, market position) rather than genuine customer satisfaction, and that a competitor is closing the gap by addressing drivers you have not even identified.
The panic trap: “Our NPS dropped 5 points this quarter — something is very wrong.” Maybe. Or maybe your survey sample shifted, or you launched a new product that attracted a different customer type, or a competitor’s marketing raised expectations without you doing anything differently. Without understanding the drivers, reacting to score movements is guessing.
The companies that build the strongest customer loyalty programs are the ones that treat NPS as a starting point for inquiry rather than an endpoint for measurement. They measure consistently, segment aggressively, and invest their energy not in chasing a benchmark number but in understanding — through direct conversation with customers — what actually drives satisfaction, loyalty, and advocacy in their specific context.
Your NPS is 32. Whether that is good depends on your industry, your segment, your trajectory, and your competitive set. But whether you improve from 32 depends on something benchmarks cannot tell you: whether you understand why it is 32 in the first place.