Churn Benchmarks 2025: What 'Good' Looks Like by Segment

Industry churn benchmarks reveal why context matters more than averages—and how leading teams reframe the question.

The question arrives in every board meeting, every investor update, every strategic planning session: "What should our churn rate be?" Teams search for the magic number that separates success from struggle, the benchmark that validates their retention performance or signals urgent intervention.

The uncomfortable truth is that this question, as commonly framed, leads teams astray. Churn benchmarks matter—but not in the way most organizations use them. Our analysis of retention data across hundreds of B2B and B2C companies reveals that the distance between your churn rate and industry averages tells you less than you think, while the gap between your current performance and your segment-specific potential reveals everything.

The difference isn't semantic. It changes what you measure, how you interpret results, and where you invest to improve retention.

Why Simple Averages Mislead

Industry reports publish clean numbers: "SaaS companies average 5% monthly churn" or "Consumer subscription services see 7% monthly churn." These figures provide comfort—a reference point suggesting whether you're winning or losing. The problem emerges when teams treat these averages as targets without understanding the distributions beneath them.

Recent analysis by ChartMogul examining over 2,100 SaaS companies found that median monthly churn rates cluster around 3-5% for B2B companies, but the interquartile range spans from 1.5% to 9%. This isn't measurement error or a data quality problem. The variance reflects fundamental structural differences in business models that make direct comparisons meaningless.

Consider two software companies, both categorized as "B2B SaaS" in benchmark reports. Company A sells enterprise workflow automation with $50,000 annual contracts, 18-month implementation cycles, and deep integration into customer operations. Company B offers a $49/month social media scheduling tool with 10-minute setup and minimal switching costs. Both face churn, but the mechanics, acceptable ranges, and improvement levers differ completely.

When Company B's leadership sees their 6% monthly churn and compares it to the "5% SaaS average," they might conclude performance is acceptable. Meanwhile, companies in their specific segment—low-touch, low-ACV tools with minimal integration—often achieve 3-4% monthly churn through focused onboarding and engagement programs. The benchmark provided false comfort while masking a significant opportunity gap.

The Segments That Actually Matter

Useful benchmarks require segmentation that reflects the structural drivers of retention. Our research identifies six dimensions that fundamentally alter what "good" churn looks like, based on analysis of retention patterns across diverse business models.

Contract value creates the first major division. For B2B companies, the relationship between ACV and churn follows a clear pattern. Products with annual contract values below $5,000 typically see monthly churn between 3-7%, while those above $50,000 consistently achieve below 2% monthly churn. The mechanism isn't mysterious—higher-value contracts attract more scrutiny during purchase, involve more stakeholders, and justify greater implementation investment, all of which increase switching friction.

Pacific Crest's SaaS survey data reinforces this pattern. Companies with ACV above $100,000 report median annual churn of 6-8%, translating to roughly 0.5-0.7% monthly. Mid-market products ($5,000-$50,000 ACV) cluster around 10-15% annually, while SMB-focused products ($1,000-$5,000 ACV) often see 20-40% annual churn. These aren't performance differences—they're structural realities of different market segments.
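The annual and monthly figures above relate through compounding rather than simple division, which matters when comparing your own numbers to reported benchmarks. A minimal sketch of the conversion in Python:

```python
def annual_to_monthly_churn(annual_rate: float) -> float:
    """Compounded monthly equivalent of an annual churn rate."""
    return 1 - (1 - annual_rate) ** (1 / 12)

def monthly_to_annual_churn(monthly_rate: float) -> float:
    """Compounded annual equivalent of a monthly churn rate."""
    return 1 - (1 - monthly_rate) ** 12

# 6-8% annual churn corresponds to roughly 0.5-0.7% monthly.
print(f"{annual_to_monthly_churn(0.06):.2%}")  # ~0.51%
print(f"{annual_to_monthly_churn(0.08):.2%}")  # ~0.69%
```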

Implementation complexity amplifies these effects. Products requiring professional services, custom integration, or significant configuration investment demonstrate materially lower churn regardless of contract value. A $10,000/year product with three months of implementation work will typically retain better than a $25,000/year product with instant activation. The invested time and effort create psychological commitment and practical switching costs that pure pricing cannot match.

Customer segment introduces another layer of variation. B2B products serving enterprise buyers (500+ employees) consistently show annual churn below 10%, often in the 5-7% range. Mid-market products (50-500 employees) typically land between 10-20% annually. SMB-focused solutions (under 50 employees) frequently see 25-40% annual churn, with some categories exceeding 50%.

These differences reflect business mortality rates as much as product satisfaction. Small businesses fail at higher rates than large enterprises. A product serving restaurants—an industry with 60% failure rates in the first three years—will inherently face higher churn than one serving hospitals, regardless of product quality. Benchmark comparisons that ignore customer segment stability mislead teams into solving the wrong problems.

Usage frequency creates distinct retention profiles in consumer products. Daily-use applications—meditation apps, habit trackers, language learning—face monthly churn between 5-10% even when functioning well, because sustained behavior change is difficult. Weekly-use products like meal planning or workout apps typically see 8-15% monthly churn. Monthly-use products such as budgeting tools or photo storage often achieve 3-7% monthly churn, as less frequent interaction reduces opportunities for disengagement.

The pattern reverses intuition. Products used less frequently can demonstrate better retention because they avoid the engagement treadmill that exhausts users of daily-use apps. A photo backup service used monthly maintains value without demanding attention. A meditation app used daily must continuously deliver motivation and variety to prevent abandonment.

Market maturity shapes retention expectations in ways that benchmark reports rarely acknowledge. Products in emerging categories face higher churn as early adopters experiment and the market sorts out use cases. Established categories with defined buyer expectations and competitive alternatives often show better retention because customers understand what they're buying and why they need it.

Contract structure creates the final major division. Month-to-month subscriptions naturally produce higher churn than annual contracts, but the magnitude varies by segment. B2B products might see 2x higher churn on monthly vs. annual terms, while consumer products often experience 3-4x differences. Freemium models introduce another layer, with paid conversion rates and subsequent retention both requiring separate benchmarking.

B2B SaaS: Benchmarks by Business Model

Within B2B software, the most meaningful segmentation combines contract value with customer size and implementation complexity. These factors interact to create distinct retention profiles that make cross-segment comparisons unhelpful.

Enterprise SaaS products (serving companies with 500+ employees, ACV above $50,000) demonstrate the strongest retention. Best-in-class companies in this segment achieve annual churn below 5%, with median performers around 6-10%. Monthly churn typically stays under 1%, often in the 0.5-0.8% range. These figures reflect long sales cycles, extensive evaluation, executive sponsorship, and deep integration into business operations.

Products in this segment with annual churn above 12% face fundamental issues—poor product-market fit, weak implementation, or misaligned buyer personas. The structural advantages of enterprise sales should naturally produce strong retention. When they don't, the problem usually predates the renewal conversation.

Mid-market B2B products (serving 50-500 employee companies, ACV $5,000-$50,000) operate in a more challenging retention environment. Strong performers achieve 8-12% annual churn, with average companies seeing 15-20%. Monthly churn in the 1-2% range represents solid performance for this segment.

The mid-market faces unique pressures. Buyers have sophisticated needs but limited resources for implementation and change management. They evaluate purchases carefully but lack the organizational inertia that keeps enterprise customers sticky. Products succeed here through exceptional onboarding, clear value demonstration, and solving problems important enough to maintain budget priority during planning cycles.

SMB-focused B2B products (serving companies under 50 employees, ACV under $5,000) confront the highest structural churn. Best-performing products in this segment achieve 20-25% annual churn, with 30-40% being more typical. Monthly churn of 2-3% represents strong performance, while 4-5% suggests opportunities for improvement without indicating fundamental problems.

The challenge isn't product quality—it's customer stability. Small businesses fail, get acquired, pivot, and face cash crunches. They buy quickly with less evaluation but also cancel quickly when priorities shift. Products serving this segment require different retention strategies: faster time-to-value, simpler implementation, and features that demonstrate impact quickly enough to secure budget in the next planning cycle.

Vertical SaaS products serving specific industries often see annual churn 20-30% lower than horizontal products at similar price points. A $10,000/year construction management platform might see 12% annual churn while a $10,000/year project management tool sees 18%. Industry specificity creates switching costs through specialized workflows, terminology, and integrations that general-purpose tools cannot match.

Consumer Subscriptions: When Engagement Drives Everything

Consumer subscription benchmarks require different segmentation because the retention drivers differ fundamentally from B2B. Price matters, but engagement patterns, content refresh rates, and habit formation often matter more.

Streaming entertainment services demonstrate how content investment shapes retention. Netflix reports annual churn around 25-30% in mature markets, translating to roughly 2-2.5% monthly. Smaller services with less content investment often see 40-50% annual churn. The difference isn't pricing—it's the breadth of content that keeps subscribers finding something worth watching each month.

This creates a challenging dynamic. Services need scale to fund content that drives retention, but achieving scale requires surviving the early years when limited content produces high churn. Successful services either launch with substantial content libraries or find niches where depth matters more than breadth.

Fitness and wellness apps face different retention mechanics. Products requiring daily engagement—meditation apps, habit trackers, workout programs—typically see 50-70% annual churn even when well-executed. Monthly churn of 5-8% represents strong performance in this category. The challenge isn't product quality but sustained behavior change, which most people find difficult regardless of tool quality.

Paradoxically, products that reduce engagement requirements often improve retention. A workout app demanding daily 45-minute sessions faces higher churn than one requiring 15-minute sessions three times weekly, even if the longer workouts produce better fitness outcomes. Retention optimization in consumer products often means designing for realistic human behavior rather than ideal outcomes.

Utility subscriptions—cloud storage, password managers, VPN services—achieve the best consumer retention when they become invisible infrastructure. Best-performing products in this category see 20-30% annual churn, with monthly churn around 2-3%. These products succeed by solving problems that don't go away and requiring minimal ongoing engagement.

The retention pattern differs from B2B utility products because consumer switching costs are lower and price sensitivity is higher. A consumer paying $10/month for cloud storage will churn for a $7 alternative, while a business paying $50/seat rarely churns over $10 differences. Consumer utility products must continuously justify value because alternatives are always one search away.

News and information subscriptions occupy a middle ground. Digital news subscriptions typically see 30-40% annual churn, with monthly rates around 3-4%. Newsletter subscriptions often experience higher churn, 40-60% annually, because the commitment feels lighter and alternatives abound. Premium research or specialized information services can achieve 20-25% annual churn when serving professional audiences with clear ROI.

The Metrics That Matter More Than Averages

Understanding your position relative to segment benchmarks provides context, but three other metrics reveal more about retention health and improvement potential. These measures expose problems that simple churn rates mask and opportunities that benchmark comparisons miss.

Cohort retention curves show how churn evolves as customers mature. Plot retention by customer cohort month-over-month, and healthy businesses show a curve that flattens over time. Month one might see 10% churn, month two 7%, month three 5%, stabilizing around 2-3% by month six. This pattern indicates that onboarding works, value becomes clear, and customers who stay develop stickiness.
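Computing the curve requires only a log of which customers were active in which months. A minimal pandas sketch, assuming a hypothetical activity table with customer_id, signup_month, and active_month columns:

```python
import pandas as pd

# Hypothetical activity log: one row per customer per month in which they were active.
activity = pd.DataFrame({
    "customer_id":  [1, 1, 1, 2, 2, 3],
    "signup_month": pd.to_datetime(["2024-01", "2024-01", "2024-01",
                                    "2024-01", "2024-01", "2024-02"]),
    "active_month": pd.to_datetime(["2024-01", "2024-02", "2024-03",
                                    "2024-01", "2024-02", "2024-02"]),
})

# Months elapsed between signup and each active month.
activity["month_index"] = (
    (activity["active_month"].dt.year - activity["signup_month"].dt.year) * 12
    + (activity["active_month"].dt.month - activity["signup_month"].dt.month)
)

# Count distinct active customers per cohort per month, then normalize by cohort size.
active_counts = activity.pivot_table(
    index="signup_month", columns="month_index",
    values="customer_id", aggfunc="nunique",
)
retention_curve = active_counts.div(active_counts[0], axis=0)
print(retention_curve)  # healthy cohorts flatten over time rather than decaying steadily
```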

Problematic businesses show different patterns. Flat cohort curves—where month one and month twelve show similar churn rates—suggest that customers never develop increased stickiness. Rising cohort curves, where churn accelerates over time, indicate value decay or better alternatives emerging. Both patterns demand different interventions than simple churn rate comparisons would suggest.

Our analysis of retention patterns across diverse business models reveals that companies with strongly declining cohort curves (churn dropping 60%+ from month one to month six) typically run churn 25-40% below their segment averages. The cohort shape predicts retention potential better than current churn rates because it reveals whether the business model creates increasing stickiness.

Churn reason distribution matters more than aggregate rates for improvement prioritization. A company with 5% monthly churn driven primarily by product gaps faces different opportunities than one with 5% churn driven by poor onboarding or pricing concerns. The aggregate number hides the improvement levers.

Leading retention teams track churn reasons systematically through exit surveys and interviews. They discover that 60-70% of churn typically concentrates in 2-3 primary reasons, making focused improvement possible. A product with 6% monthly churn might find that 4% stems from onboarding failures—meaning better onboarding could reduce total churn by two-thirds.
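Once reasons are coded consistently, checking how concentrated they are takes only a few lines. A minimal sketch with hypothetical reason labels:

```python
import pandas as pd

# Hypothetical primary churn reason recorded for each churned customer.
reasons = pd.Series([
    "onboarding", "onboarding", "onboarding", "pricing",
    "missing_feature", "onboarding", "pricing", "competitor",
])

share = reasons.value_counts(normalize=True)
print(share)                   # share of churn attributed to each reason
print(share.cumsum().head(3))  # do the top 2-3 reasons cover 60-70% of churn?
```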

The challenge is getting honest churn reasons. Exit surveys completed by 15-20% of churning customers provide biased samples. Voluntary responses skew toward extreme experiences—either very negative or very positive—missing the middle majority who leave quietly. This is where AI-powered research platforms like User Intuition's churn analysis create advantages, conducting natural conversations with churned customers that surface honest reasons at scale rather than relying on voluntary survey responses.

Revenue retention versus logo retention reveals whether churn concentrates in small or large customers. A company might report 10% annual logo churn but 95% net revenue retention, indicating that churned customers were small while remaining customers expanded. Alternatively, 10% logo churn with 85% net revenue retention suggests larger customers are leaving, a far more concerning pattern.

B2B companies should track both metrics separately and understand the distribution. Enterprise-focused products typically see lower logo churn (5-8% annually) but should achieve 100%+ net revenue retention through expansion. Mid-market products might accept 12-15% logo churn if they maintain 90%+ net revenue retention through expansion in remaining accounts. SMB products often see 30%+ logo churn but can still build sustainable businesses if revenue retention stays above 80% through cohort expansion.
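Both views fall out of a per-customer snapshot of recurring revenue at the start and end of the measurement window. A minimal sketch with hypothetical MRR figures, showing how noticeable logo churn can coexist with net revenue retention above 100%:

```python
import pandas as pd

# Hypothetical per-customer MRR at the start and end of a 12-month window.
# end_mrr of 0 means the logo churned; end_mrr above start_mrr means expansion.
accounts = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "start_mrr":   [500.0, 2000.0, 150.0, 800.0],
    "end_mrr":     [550.0, 2600.0, 0.0,   800.0],
})

logo_churn = (accounts["end_mrr"] == 0).mean()
net_revenue_retention = accounts["end_mrr"].sum() / accounts["start_mrr"].sum()

print(f"Logo churn: {logo_churn:.0%}")                        # 25%: one small logo lost
print(f"Net revenue retention: {net_revenue_retention:.0%}")  # ~114%: expansion outweighs the loss
```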

Early Warning Signals That Predict Churn

Benchmark comparisons tell you whether current churn is acceptable. Leading indicators tell you whether it's about to get worse. The most valuable metrics predict future churn early enough to intervene, creating opportunities to improve retention before customers reach the cancellation decision.

Usage decline precedes churn by weeks or months in most products. Customers don't suddenly cancel—they gradually disengage first. A customer who used your product 20 times last month and 8 times this month is signaling future churn regardless of current satisfaction. The decline pattern matters more than absolute usage levels.

Products that track weekly active users, daily active users, or feature engagement can build predictive models identifying at-risk customers. Research by Totango found that customers who decrease usage by 40%+ over two consecutive periods churn at 3-4x higher rates than stable users. The specific threshold varies by product, but the pattern holds across categories.
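A basic version of this flag needs nothing more than monthly usage counts per customer. A minimal sketch, assuming a hypothetical sessions table and treating two 40%+ period-over-period drops as the at-risk signal:

```python
import pandas as pd

# Hypothetical monthly usage counts per customer.
usage = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 2],
    "month":       ["2025-01", "2025-02", "2025-03"] * 2,
    "sessions":    [20, 11, 6, 15, 16, 14],
})

usage = usage.sort_values(["customer_id", "month"])
usage["pct_change"] = usage.groupby("customer_id")["sessions"].pct_change()

# Count 40%+ period-over-period drops per customer; two or more marks the account
# at risk (a simplified proxy for the consecutive-decline pattern described above).
drops = usage[usage["pct_change"] <= -0.40].groupby("customer_id").size()
at_risk = drops[drops >= 2].index.tolist()
print(at_risk)  # [1]
```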

Time-to-value achievement predicts long-term retention better than early satisfaction scores. Customers who reach meaningful value milestones within the first 30 days retain at 40-60% higher rates than those who don't, according to analysis by Gainsight. The milestone varies by product—first report generated, first project completed, first team member added—but hitting it early separates customers who stay from those who churn.

This insight changes onboarding strategy. Rather than optimizing for feature adoption or engagement metrics, leading products optimize for the specific milestone that predicts retention. A project management tool might focus obsessively on getting customers to complete their first project within 7 days, knowing that customers who hit this milestone churn at 1.5% monthly while those who miss it churn at 6% monthly.

Support ticket patterns reveal brewing dissatisfaction before customers cancel. Customers who submit multiple tickets about the same issue churn at 2-3x normal rates. Customers who receive slow responses or unhelpful resolutions churn at even higher rates. The correlation isn't subtle—support experience predicts retention as reliably as product usage.

Leading companies track support metrics by customer segment and renewal proximity. They prioritize responses to customers within 90 days of renewal and escalate repeat issues that might drive cancellation. They recognize that support interactions are retention moments, not cost centers to minimize.
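A basic version of this prioritization joins ticket history with renewal dates. A minimal sketch with hypothetical tickets and renewal dates, escalating repeat issues for customers renewing within 90 days:

```python
import pandas as pd

today = pd.Timestamp("2025-06-01")

# Hypothetical support tickets (with a coded issue) and contract renewal dates.
tickets = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "issue":       ["export", "export", "billing", "sync", "sync", "sync"],
})
renewals = pd.DataFrame({
    "customer_id":  [1, 2, 3],
    "renewal_date": pd.to_datetime(["2025-07-15", "2026-01-10", "2025-08-01"]),
})

# Customers who filed two or more tickets about the same issue.
repeat_issues = (
    tickets.groupby(["customer_id", "issue"]).size()
    .loc[lambda s: s >= 2]
    .reset_index(name="ticket_count")
)

# Escalate repeat issues for customers whose renewal falls within 90 days.
escalations = repeat_issues.merge(renewals, on="customer_id")
escalations = escalations[escalations["renewal_date"] <= today + pd.Timedelta(days=90)]
print(escalations)
```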

Expansion activity predicts retention in B2B products. Customers who add users, upgrade plans, or adopt additional features churn at 50-70% lower rates than static customers. The causation runs both directions—satisfied customers expand, and expansion increases switching costs—but the correlation is consistent enough to use as a leading indicator.

Products should track expansion velocity by cohort and segment. A mid-market customer who hasn't expanded in 12 months faces higher churn risk than one who added seats last quarter. The absence of expansion signals either limited value realization or organizational constraints that might also drive cancellation.
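Tracking expansion velocity requires only a log of expansion events such as seat additions or plan upgrades. A minimal sketch, assuming a hypothetical events table and flagging accounts with no expansion in the past 12 months (accounts missing from the table entirely, meaning they never expanded, deserve the same review):

```python
import pandas as pd

today = pd.Timestamp("2025-06-01")

# Hypothetical expansion events (seat adds, upgrades, new modules) per customer.
expansions = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "event_date":  pd.to_datetime(["2024-02-10", "2025-03-05", "2023-04-20", "2025-01-15"]),
})

last_expansion = expansions.groupby("customer_id")["event_date"].max()
months_since = (today - last_expansion).dt.days / 30.4

# Accounts with no expansion in the past 12 months carry elevated churn risk.
stale_accounts = months_since[months_since >= 12].index.tolist()
print(stale_accounts)  # [2]
```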

When Benchmarks Become Dangerous

Benchmark obsession creates predictable pathologies that damage retention while appearing to improve it. These patterns emerge when teams optimize for the benchmark number rather than the underlying customer experience and business health.

Retention theater involves tactics that reduce reported churn without improving customer outcomes. Offering steep discounts to prevent cancellation lowers churn rates but creates customers who stay only for pricing, not value. They'll leave when discounts end or better deals appear. Aggressive save offers can actually increase long-term churn by training customers to threaten cancellation to get concessions.

Some companies make cancellation deliberately difficult—requiring phone calls, multiple confirmation steps, or waiting periods. These friction tactics reduce reported churn while degrading brand perception and customer lifetime value. A customer who stays because cancellation is annoying isn't a retained customer—they're a future detractor waiting for an easier exit.

Contract term manipulation improves annual churn metrics without addressing retention fundamentals. Pushing customers from monthly to annual contracts reduces reported churn because cancellations concentrate at contract end. But if customers don't renew those annual contracts, you've simply delayed churn recognition while reducing the feedback frequency that enables improvement.

Annual contracts work well when product value justifies the commitment. They become problematic when used primarily to improve retention metrics. The test is renewal rates—if annual contracts renew at 80%+ rates, they're working. If renewal rates are 60-70%, you're using contract terms to mask retention problems rather than solve them.

Segment cherry-picking distorts benchmarks when companies selectively include or exclude customer groups to improve reported metrics. Excluding "strategic" customers from churn calculations because they're "different" hides problems in your most important segment. Removing customers who churned due to business closure artificially improves rates while ignoring that serving failure-prone segments is a business model choice.

Honest benchmarking requires consistent definitions and complete populations. If you serve SMB customers, include their higher churn in your metrics rather than comparing only your enterprise subset to enterprise benchmarks. If you offer month-to-month contracts, report those results rather than highlighting only annual contract retention.

Building Your Own Baseline

The most valuable benchmark isn't industry average—it's your own historical performance segmented by the factors that drive retention in your business. This internal baseline reveals improvement trends, identifies high-performing segments worth expanding, and exposes problems before they become crises.

Start by segmenting your customer base along dimensions that should theoretically affect retention: contract value, customer size, implementation complexity, contract term, acquisition channel, or vertical market. Calculate retention metrics for each segment over the past 12-24 months. The goal isn't perfect precision—it's identifying patterns that suggest where you naturally perform well and where structural challenges exist.

Most companies discover that churn varies 2-3x across segments. A product might see 3% monthly churn in customers above $20,000 ACV but 7% monthly churn below $5,000 ACV. This variance isn't random—it reflects how well your product, pricing, and go-to-market motion fit different customer profiles. The insight guides both product strategy and growth investment.
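The calculation itself is simple once customers are tagged with the relevant dimensions. A minimal sketch, assuming a hypothetical monthly snapshot with an ACV band and a churn flag per customer:

```python
import pandas as pd

# Hypothetical snapshot: customers active at the start of the month, their
# ACV band, and whether they cancelled during the month.
customers = pd.DataFrame({
    "customer_id": range(1, 9),
    "acv_band":    ["<$5k", "<$5k", "<$5k", "<$5k",
                    ">$20k", ">$20k", ">$20k", ">$20k"],
    "churned":     [1, 0, 1, 0, 0, 0, 1, 0],
})

# Monthly churn rate per segment; a 2-3x spread across bands is common.
segment_churn = customers.groupby("acv_band")["churned"].mean()
print(segment_churn)
```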

Track cohort retention curves for each major segment. Plot monthly retention rates for customers by month since acquisition. Healthy segments show declining churn curves that flatten over time. Problematic segments show flat or rising curves. The curve shape reveals whether customers develop stickiness or remain perpetually at-risk.

Compare cohort curves across segments and over time. If your Q1 2024 cohort shows better month-six retention than your Q1 2023 cohort, something improved—product changes, onboarding updates, or customer profile shifts. If enterprise cohorts flatten by month three but SMB cohorts never flatten, you've identified a structural difference requiring different retention strategies.

Establish internal benchmarks for leading indicators: time-to-value achievement, usage patterns, expansion velocity, support satisfaction. Track these metrics by segment and monitor how they correlate with retention. You'll discover that certain patterns predict churn reliably in your business, even if they differ from published industry benchmarks.

A SaaS product might find that customers who complete onboarding within 14 days churn at 2% monthly while those taking 30+ days churn at 5% monthly. This internal benchmark (onboarding completed within 14 days predicts 60% lower churn) matters more than knowing that industry average onboarding takes 21 days. It gives you a clear target and quantifies the retention impact of hitting it.
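Turning a leading indicator into an internal benchmark means splitting customers on the indicator and comparing subsequent churn. A minimal sketch, assuming hypothetical onboarding and churn fields:

```python
import pandas as pd

# Hypothetical customer records: days to complete onboarding and whether
# the customer churned within the following six months.
customers = pd.DataFrame({
    "customer_id":         [1, 2, 3, 4, 5, 6, 7, 8],
    "days_to_onboard":     [7, 12, 30, 45, 10, 60, 14, 35],
    "churned_in_6_months": [0, 0, 1, 1, 0, 1, 0, 0],
})

customers["fast_onboarding"] = customers["days_to_onboard"] <= 14
churn_by_speed = customers.groupby("fast_onboarding")["churned_in_6_months"].mean()
print(churn_by_speed)
# If the gap is large and stable across cohorts, "onboarding completed within
# 14 days" becomes an internal benchmark worth managing toward.
```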

Review your internal benchmarks quarterly. Retention patterns shift as products evolve, markets mature, and competition changes. What predicted retention last year might not predict it this year. Regular review ensures your benchmarks remain relevant and your improvement efforts focus on current drivers rather than historical patterns.

The Questions That Matter More

Rather than asking "Is our churn rate good?" relative to industry averages, leading retention teams ask different questions that drive actionable insight and focused improvement.

"Why do our best customers stay?" reveals the value propositions and experiences that create stickiness. Interview customers with the longest tenure, highest usage, and strongest expansion. Understand what they value, how they use your product, and what would make them leave. These insights guide product strategy and help you attract more customers who fit the profile of those who stay.

This is where systematic customer research creates advantages over anecdotal feedback. AI-powered interview platforms can conduct dozens of in-depth conversations with retained customers, identifying patterns in their experiences that predict loyalty. The insights often surprise teams—customers stay for reasons that differ from what marketing emphasizes or product teams assume.

"What causes customers to churn in their first 90 days versus after a year?" separates onboarding failures from value delivery problems. Early churn typically stems from poor product-market fit, implementation challenges, or unmet expectations. Late churn reflects competitive alternatives, changing needs, or value degradation. The causes differ, so the solutions must differ.

Track churn reasons by customer tenure. You might discover that 60% of churn in months 1-3 stems from implementation difficulties, while 70% of churn after month 12 stems from better alternatives emerging. This distribution tells you where to invest—better onboarding for early churn, product differentiation for late churn.

"How does retention vary by acquisition channel, customer segment, and contract structure?" identifies where your business model naturally succeeds and where structural challenges exist. You might find that customers acquired through partnerships retain 40% better than those from paid advertising, suggesting where to focus growth investment. Or that annual contracts retain only marginally better than monthly, indicating that contract term isn't masking fundamental value issues.

These variations guide strategic decisions that aggregate churn rates cannot inform. If enterprise customers retain at 95% annually while SMB customers retain at 70%, should you focus upmarket or invest in SMB retention improvements? The answer depends on unit economics, market size, and competitive dynamics—but you can't make the decision without understanding the retention variance.

"What early signals predict churn accurately enough to enable intervention?" builds the foundation for proactive retention programs. Identify the usage patterns, support interactions, or expansion behaviors that precede churn by weeks or months. Build systems to monitor these signals and trigger interventions before customers reach the cancellation decision.

The most effective retention programs don't react to cancellation requests—they prevent them by addressing disengagement early. A customer who hasn't logged in for two weeks receives proactive outreach. A customer whose usage dropped 50% gets a success check-in. A customer approaching renewal without recent expansion triggers a value review. These interventions work because they address problems before they calcify into cancellation decisions.
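These programs can start as simple rules over account signals before graduating to predictive models. A minimal sketch, assuming hypothetical account fields that mirror the triggers above:

```python
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    customer_id: int
    days_since_last_login: int
    usage_change_pct: float      # -0.5 means usage halved versus the prior period
    days_to_renewal: int
    expanded_recently: bool

def retention_triggers(account: AccountSnapshot) -> list[str]:
    """Map disengagement signals to proactive interventions."""
    actions = []
    if account.days_since_last_login >= 14:
        actions.append("proactive outreach: no login in two weeks")
    if account.usage_change_pct <= -0.5:
        actions.append("success check-in: usage dropped 50%+")
    if account.days_to_renewal <= 60 and not account.expanded_recently:
        actions.append("value review ahead of renewal")
    return actions

print(retention_triggers(AccountSnapshot(42, 16, -0.6, 45, False)))
```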

What Good Actually Looks Like

Good retention isn't a number—it's a relationship between your business model, customer segment, and the value you deliver. It's retention that enables profitable unit economics, supports your growth strategy, and improves over time as you learn what drives customers to stay.

For enterprise B2B products, good retention means annual churn below 10% with net revenue retention above 110%. It means cohort curves that flatten by month six, showing that customers develop stickiness. It means understanding exactly why your best customers stay and attracting more customers who fit that profile. It means having early warning systems that identify at-risk customers before they decide to leave.

For mid-market B2B products, good retention means annual churn below 15% with net revenue retention above 100%. It means time-to-value under 30 days and usage patterns that predict long-term retention. It means knowing which segments naturally retain well and which require extra support. It means having retention economics that support your customer acquisition costs and growth targets.

For SMB B2B products, good retention means annual churn below 30% with efficient customer acquisition that accounts for higher turnover. It means fast onboarding that demonstrates value before customers face their next budget review. It means pricing that aligns with customer budgets and value perception. It means accepting that some churn reflects customer business failure rather than product issues, and building a business model that works despite that reality.

For consumer subscriptions, good retention means understanding your engagement model and optimizing for realistic human behavior. It means annual churn below 30% for utility products, below 50% for entertainment, and below 70% for behavior change products. It means knowing why your best customers stay and making that experience accessible to more subscribers. It means having content, features, or value that remain relevant as customer needs evolve.

Across all segments, good retention means improving over time. Your churn rate this quarter should be lower than it was a year ago, your cohort curves should show increasing stickiness, and your understanding of retention drivers should deepen continuously. Benchmark comparisons provide context, but internal improvement trends reveal whether you're building a business that gets better at keeping customers.

The companies that excel at retention don't obsess over whether their churn rate matches industry averages. They obsess over understanding why customers stay, why they leave, and what early signals predict each outcome. They build systems to monitor these signals, intervene proactively, and learn continuously from both retained and churned customers.

That understanding requires systematic customer research—not quarterly surveys with 15% response rates, but ongoing conversations that surface honest feedback at scale. It requires asking the right questions, listening carefully to answers, and acting on insights rather than just collecting them. It requires treating retention as a learning system, not a metric to hit.

The benchmark question—"What should our churn rate be?"—has an answer, but it's more complex than a single number. It depends on your business model, customer segment, and product category. It depends on your unit economics and growth strategy. Most importantly, it depends on whether you're improving over time and building the understanding required to retain customers better tomorrow than you do today. That capability matters more than any benchmark comparison.