Pricing power is arguably the single most important variable in commercial due diligence, yet it remains one of the most poorly validated. Management teams present historical price increases as evidence of pricing power. Investment memos cite gross margin expansion as proof. Confidential Information Memoranda include customer satisfaction scores and low churn rates as proxies. None of these actually measure pricing power — they measure what has happened under current competitive conditions, not what will happen when a new owner attempts to accelerate price increases or when a competitor enters the market with a lower-cost alternative.
Warren Buffett famously described pricing power as the most important factor in evaluating a business: “If you’ve got the power to raise prices without losing business to a competitor, you’ve got a very good business.” The operative phrase is “without losing business.” Historical price increases that coincided with market-wide inflation, contract escalators baked into multi-year agreements, or price hikes absorbed by customers who hadn’t yet evaluated alternatives do not constitute evidence of durable pricing power. They constitute evidence that pricing power hasn’t been tested.
For PE firms and strategic acquirers conducting commercial due diligence, the question isn’t whether the company has raised prices in the past. The question is whether customers would stay — and at what volume — if prices increased 15-30% over the next three to five years under a new ownership thesis. Answering that question requires going directly to customers and testing willingness-to-pay with methodological rigor.
Why Traditional Pricing Analysis Fails in Due Diligence
Most DD pricing analysis relies on three inputs: historical pricing data from the target, competitive pricing benchmarks from desk research, and anecdotal feedback from a handful of customer reference calls. Each input carries structural blind spots that, in combination, create a dangerously incomplete picture.
Historical pricing data tells you what the company charged, not what it could have charged. Many acquisition targets — particularly founder-led SaaS businesses — have systematically underpriced their products for years because they prioritized growth over monetization. The historical data shows stable pricing and high retention, which the CIM presents as “pricing stability.” In reality, the company has never tested its pricing ceiling, and the apparent stability reflects underpricing rather than customer insensitivity to price.
Competitive benchmarks from desk research suffer from a different problem: they capture list prices, not realized prices. Enterprise software pricing is notoriously opaque. Published pricing tiers represent opening positions, not transacted prices. Discounting practices vary wildly by deal size, competitive situation, and sales rep discretion. A desk research exercise that compares the target’s average selling price to competitors’ published pricing can misstate the actual competitive pricing environment by 30-50%.
Customer reference calls — typically 8-12 conversations arranged by the target company — provide the worst data of all. These are hand-selected advocates. The target chooses its happiest, stickiest, least price-sensitive customers for reference calls. Drawing pricing conclusions from this sample is like evaluating a restaurant’s food quality by interviewing only customers who left five-star reviews. The reference call format compounds the selection bias: a 30-minute call with a deal team member on the line isn’t the environment where a customer will volunteer that they’ve been evaluating cheaper alternatives.
Structured Willingness-to-Pay Testing for Deal Evaluation
Rigorous pricing power validation requires structured willingness-to-pay (WTP) testing across a representative sample of the target’s customer base — not just advocates, but mid-tier accounts, at-risk accounts, and recently churned customers. The methodology needs to capture both stated preferences (what customers say they’d pay) and revealed preferences (what their behavior and decision processes suggest about price sensitivity).
The Van Westendorp Price Sensitivity Meter remains a useful starting framework, adapted for the DD context. The classic four-question structure asks respondents to identify prices at which a product would be “too expensive to consider,” “expensive but still worth considering,” “a bargain,” and “too cheap to trust the quality.” In a due diligence setting, these questions should be framed relative to the target’s current pricing rather than in the abstract. You want to understand how far above the current price the “too expensive” threshold sits — that gap represents the pricing headroom available to a new owner.
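A minimal sketch of what this headroom calculation can look like, using two of the four Van Westendorp questions (the “bargain” and “too expensive” thresholds) framed as multiples of the target’s current price. All response data here is hypothetical and for illustration only:

```python
# Illustrative Van Westendorp-style headroom analysis for a DD context.
# Responses are multiples of the target's current price (1.10 = "10% above
# today's price"). All figures below are hypothetical.
import numpy as np

# Price each respondent would still consider a bargain, and price they
# would consider too expensive to consider.
bargain       = np.array([0.90, 1.00, 0.85, 1.10, 0.95, 1.05,
                          0.90, 1.00, 1.15, 0.95, 1.00, 1.05])
too_expensive = np.array([1.20, 1.35, 1.15, 1.50, 1.25, 1.40,
                          1.10, 1.30, 1.60, 1.20, 1.45, 1.25])

grid = np.linspace(0.8, 1.8, 101)  # candidate prices as multiples of current
cheap_share   = np.array([np.mean(bargain >= p) for p in grid])
too_exp_share = np.array([np.mean(too_expensive <= p) for p in grid])

# Point of marginal expensiveness: where the share calling the price too
# expensive overtakes the share still seeing it as a bargain.
cross = np.argmax(cheap_share - too_exp_share <= 0)
pme = grid[cross]
print(f"Upper pricing bound ~{pme:.2f}x current price "
      f"(~{pme - 1:.0%} headroom before 'too expensive' dominates)")
```

The gap between 1.0 (current price) and the crossing point is the pricing headroom referenced above; in a real study the curves would be built per segment, not pooled.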
However, Van Westendorp alone is insufficient for DD-grade analysis. The methodology captures stated price sensitivity but doesn’t account for the availability and attractiveness of alternatives. A customer might report that they’d consider the product “too expensive” at 40% above the current price, but if their next-best alternative costs 60% more, the actual switching threshold is far higher than the stated one. This is why pricing research in due diligence must integrate competitive alternative mapping alongside direct price sensitivity measurement.
Gabor-Granger stepwise price testing adds a complementary data point. This approach presents customers with specific price points in sequence, asking at each level whether they would “definitely buy,” “probably buy,” “probably not buy,” or “definitely not buy.” The resulting demand curve shows how purchase probability declines as price increases — and critically, where the inflection points are. A gradual, linear decline suggests that pricing power is real but finite. A sharp cliff at a specific price point suggests there’s a competitive or psychological threshold that constrains pricing above that level.
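The demand-curve analysis from those stepwise responses is straightforward to compute. This sketch uses hypothetical buy-share data with a deliberate cliff at the +20% step to show how the inflection point and the revenue-maximizing price level fall out of the numbers:

```python
# Illustrative Gabor-Granger demand curve. Price steps are multiples of
# current price; buy_share is the fraction answering "definitely buy" or
# "probably buy" at each step. All numbers are hypothetical.
price_steps = [1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30]
buy_share   = [0.95, 0.92, 0.88, 0.85, 0.62, 0.55, 0.50]  # cliff at +20%

# Expected revenue index at each step: price x retained demand.
revenue_index = [p * s for p, s in zip(price_steps, buy_share)]

# Locate the inflection: the step with the largest demand drop.
drops = [buy_share[i] - buy_share[i + 1] for i in range(len(buy_share) - 1)]
cliff = price_steps[drops.index(max(drops)) + 1]
best = price_steps[revenue_index.index(max(revenue_index))]
print(f"Demand cliff at {cliff:.2f}x current price; "
      f"revenue-maximizing step: {best:.2f}x")
```

Note that in this example the revenue-maximizing step sits just below the cliff: purchase probability declines gradually up to +15%, so price times retained demand keeps rising until the threshold is breached.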
The most diagnostic approach combines these structured techniques with open-ended qualitative probing. After establishing a customer’s price sensitivity quantitatively, a skilled interviewer explores the reasoning behind those thresholds. What would they do if the price increased beyond their stated ceiling? Have they evaluated alternatives? What would trigger an active evaluation? What features or capabilities would justify a higher price? This qualitative layer is where the real DD value lives — it reveals the mechanisms behind price sensitivity, not just the levels.
Competitive Alternative Mapping Through Customer Voice
Pricing power doesn’t exist in isolation. It’s a function of the competitive landscape as perceived by customers — which is often dramatically different from the competitive landscape as described by the target’s management team. Management teams have a structural incentive to minimize the threat of alternatives in CIM presentations. They’ll acknowledge direct competitors but downplay adjacent solutions, in-house alternatives, and the option of simply not buying.
Customer research conducted at scale reveals the true competitive set by asking customers to describe their actual decision process. When their current contract comes up for renewal, who else do they consider? When they last evaluated the market, which vendors made their shortlist? If the product disappeared tomorrow, what would they do instead? These questions surface alternatives that management never mentions — the Excel spreadsheet that handles 60% of the use case, the adjacent platform that’s added competing functionality, the offshore services firm that offers a manual alternative at one-third the cost.
The structure of competitive alternatives directly constrains pricing power. When customers’ next-best alternative is 80% as good at 50% of the price, the target’s pricing ceiling is low regardless of current customer satisfaction. When the next-best alternative requires a painful six-month migration and retraining cycle, the switching costs create pricing headroom that persists even if the product itself isn’t differentiated. Mapping these dynamics across the customer base reveals not just the average competitive position but the distribution — which segments have strong lock-in and which are at risk of defection at modest price increases.
User Intuition’s AI-moderated interview platform enables this competitive mapping at a scale that transforms what’s possible during due diligence. Rather than relying on 10 hand-picked reference calls, deal teams can conduct 200-300 customer interviews within 48-72 hours, covering a representative cross-section of the target’s customer base. The AI moderator probes adaptively — when a customer mentions considering an alternative, the system follows up with questions about the evaluation criteria, the perceived gaps, and the price at which they’d switch. This produces a competitive intelligence dataset that traditional DD approaches simply cannot replicate within deal timelines.
Price Elasticity Indicators from Customer Interviews
Beyond direct WTP testing, customer interviews surface indirect indicators of price elasticity that experienced DD practitioners weigh heavily. These signals are often more predictive than stated price sensitivity because they reveal the structural position of the product within the customer’s operations and budget.
The first indicator is budget ownership. Products purchased from discretionary departmental budgets face fundamentally different pricing dynamics than products purchased from dedicated line items in enterprise budgets. When a customer says, “I pay for this out of my team’s tools budget,” that signals high price elasticity — the product competes for budget with every other tool the team uses, and a price increase forces a zero-sum trade-off. When a customer says, “This is a line item in our annual technology budget that goes through IT procurement,” the pricing dynamics are different. The budget is allocated, the procurement process has switching costs, and modest price increases flow through without triggering re-evaluation.
The second indicator is usage centrality. Products that sit in the critical path of a customer’s core workflow command pricing power that peripheral tools don’t. The diagnostic question isn’t “How often do you use this?” but “What happens if this stops working?” When the answer is “My team can’t do their jobs” or “We’d have to stop accepting orders,” the product has earned a position of operational dependency that supports aggressive pricing. When the answer is “We’d figure out a workaround” or “We’d go back to doing it manually,” the product is convenient but not essential — a fundamentally different pricing power profile.
The third indicator is decision-maker identity. Who decides whether to renew, and what do they optimize for? When end-users drive renewal decisions, they optimize for functionality and experience — price is one factor among many. When procurement professionals drive renewal decisions, they optimize for cost — and they have the training, incentives, and organizational mandate to push back on price increases. A customer base where procurement increasingly controls renewals is a customer base where pricing power is eroding, regardless of what current retention metrics show.
The fourth indicator is the language customers use to describe value. Customers with high willingness-to-pay describe products in terms of outcomes: “This saves us $2M a year” or “We couldn’t have launched that product line without this.” Customers with low WTP describe products in terms of features: “It does what we need” or “It’s fine for basic use cases.” Outcome-oriented framing signals an internalized value narrative that justifies premium pricing. Feature-oriented framing signals commodity perception.
Building a Pricing Power Evidence Base for the Investment Committee
The output of a structured pricing power validation exercise should be a quantitative evidence base that investment committee members can evaluate alongside financial projections. This means translating qualitative customer insights into metrics that connect directly to the deal model.
The primary output is a pricing headroom estimate — the percentage price increase the customer base would absorb before triggering material churn. This isn’t a single number but a distribution: what percentage of revenue sits in segments with greater than 20% headroom, 10-20% headroom, and less than 10% headroom? This segmented view directly informs the revenue growth assumptions in the deal model. If 40% of revenue comes from segments with less than 10% pricing headroom, the GP’s plan to drive 15% annual price increases will destroy value rather than create it.
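The segmented headroom view can be assembled as a simple revenue-weighted bucketing of account-level estimates. The account data below is hypothetical; in practice each headroom figure would come from the WTP interviews:

```python
# Sketch of the segmented headroom distribution for an IC memo.
# Each tuple is (ARR in $k, estimated price increase the account would
# absorb before churning). All figures are illustrative.
accounts = [
    (900, 0.25), (450, 0.05), (700, 0.30), (300, 0.08),
    (600, 0.15), (250, 0.22), (500, 0.04), (800, 0.12),
]

buckets = {">20%": 0.0, "10-20%": 0.0, "<10%": 0.0}
for arr, headroom in accounts:
    if headroom > 0.20:
        buckets[">20%"] += arr
    elif headroom >= 0.10:
        buckets["10-20%"] += arr
    else:
        buckets["<10%"] += arr

total = sum(arr for arr, _ in accounts)
for label, arr in buckets.items():
    print(f"{label} headroom: {arr / total:.0%} of revenue (${arr:,.0f}k ARR)")
```

The resulting percentages map directly onto the deal-model question above: how much revenue can actually carry the planned price increases.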
The secondary output is a competitive vulnerability map — which segments face credible alternatives that constrain pricing, and which are protected by switching costs or integration depth. This map should identify both current competitive threats and emerging ones that customer interviews surface before they appear in analyst reports.
The tertiary output is a price-value optimization framework — capabilities that customers cite as justifying premium pricing, and gaps that undermine willingness-to-pay. If customers consistently say they’d pay 25% more for a capability the product currently lacks, that’s a product development thesis that directly supports the pricing growth assumption.
Platforms like User Intuition that deliver commercial due diligence research at scale make it possible to build this evidence base within the compressed timelines of a typical deal process. The combination of AI-moderated interviews — which maintain qualitative depth while operating at quantitative scale — with structured WTP methodologies produces pricing power validation that stands up to investment committee scrutiny. The $20-per-interview economics mean that a comprehensive 250-customer pricing study costs $5,000 rather than the $75,000-$150,000 that traditional research firms would charge, fundamentally changing the cost-benefit calculus of rigorous pricing validation in DD.
Common Pitfalls in DD Pricing Analysis
Three systematic errors recur in due diligence pricing analysis, and each can be mitigated through properly designed customer research.
The first is confusing retention with pricing power. High net retention rates are the most commonly cited evidence of pricing power in CIMs, but they conflate several distinct dynamics. A company with 115% net revenue retention might have genuine pricing power — or it might have strong land-and-expand motion with flat pricing, or it might have contractual escalators that customers accept passively until their next procurement cycle. Disaggregating the components of net retention through customer interviews — asking specifically about price increase history, reactions to those increases, and anticipated response to future increases — separates authentic pricing power from structural artifacts.
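The disaggregation itself is simple arithmetic once interviews have attributed the components. This worked example uses hypothetical figures chosen to reproduce the 115% NRR mentioned above, showing how little of it may actually be price-driven:

```python
# Hypothetical decomposition of net revenue retention into components,
# to separate tested pricing power from structural artifacts.
starting_arr   = 10_000_000
expansion      =  1_950_000   # seat/usage growth (land-and-expand)
price_increase =    300_000   # contractual escalators, not tested WTP
contraction    =   -400_000
churn          =   -350_000

nrr = (starting_arr + expansion + price_increase
       + contraction + churn) / starting_arr
print(f"NRR: {nrr:.1%}")
print(f"  of which price-driven: {price_increase / starting_arr:.1%} pts")
```

Here a headline 115% NRR contains only 3 points of price-driven growth, and even that came from escalators customers accepted passively rather than tested willingness-to-pay.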
The second pitfall is anchoring on current competitive dynamics. The competitive landscape at the time of the deal is not the competitive landscape that will exist during the hold period. Customer interviews should explicitly probe for emerging alternatives, planned evaluations, and changing procurement strategies. When 30% of customers mention that they’re “keeping an eye on” a specific emerging competitor, that’s a pricing power risk that won’t show up in the target’s churn data for another 12-18 months — but it needs to be reflected in the deal model today.
The third pitfall is treating the customer base as homogeneous. Average willingness-to-pay across the entire customer base is a meaningless statistic if the distribution is bimodal — 40% of customers with very high WTP and 60% with very low WTP. Segmented analysis is essential. The most useful segmentation dimensions for pricing analysis are company size (enterprise vs. mid-market vs. SMB), use case maturity (power users vs. basic users), and competitive exposure (customers with credible alternatives vs. customers with high switching costs). Each segment may have fundamentally different pricing power profiles, and the deal model needs to reflect that heterogeneity.
From Pricing Validation to Value Creation
The strongest DD pricing analyses don’t just validate current pricing power — they identify specific levers for pricing optimization under new ownership. Customer research surfaces these levers directly. When 70% of mid-market customers say they’d pay 20% more for an enterprise-grade SLA that the product doesn’t currently offer, that’s a packaging and pricing thesis. When customers in regulated industries describe compliance requirements that make switching prohibitively expensive, that’s a segment-specific pricing power insight that supports differentiated pricing tiers. Building this evidence base during due diligence — rather than waiting until post-close — gives the acquiring team a 100-day plan for pricing optimization that’s grounded in customer evidence rather than management assertions.