Reference Deep-Dive · 11 min read

TAM Validation Through Customer Interviews: Beyond Top-Down Market Sizing in Due Diligence

By Kevin, Founder & CEO

Every Confidential Information Memorandum contains a TAM slide. It typically features a large number — $15B, $40B, $80B — sourced from Gartner, IDC, or a bespoke analysis by the sell-side advisor. The number is always large enough to make the target’s current revenue look like a tiny fraction of a massive opportunity. And it is almost always misleading.

The fundamental problem with top-down TAM analysis in due diligence is that it answers the wrong question. The relevant question for an acquirer isn’t “How big is the theoretical market for this category?” It’s “How many companies will actually buy this specific product, at this price point, given these competitive alternatives, within the hold period?” Those are radically different questions, and the gap between their answers is often 5-10x.

A 2024 analysis by Bain & Company examined TAM projections in 80 PE-backed software deals against actual market penetration achieved during the hold period. The median deal achieved only 22% of the TAM-implied revenue trajectory. The primary driver wasn’t execution failure — it was TAM inflation. The addressable market was smaller, the obtainable share was lower, and the average selling price was below what the TAM model assumed. These aren’t edge cases. They represent the systematic optimism embedded in how TAMs are constructed for deal marketing purposes.

For PE firms and strategic acquirers performing commercial due diligence, validating TAM through direct customer and market research is one of the highest-ROI activities available. A TAM that’s 3x too large doesn’t just produce an inaccurate market share estimate — it fundamentally distorts the growth assumptions in the deal model, the competitive positioning narrative, and the value creation plan.

The Anatomy of TAM Inflation


Top-down TAM estimates inflate the addressable market through four recurring mechanisms, each of which can be exposed and corrected through structured customer research.

The first mechanism is category conflation. Market research reports define categories broadly to maximize the addressable universe. A company selling AI-powered contract analysis software might cite the TAM for “enterprise legal technology,” which includes practice management, e-billing, document management, and a dozen other sub-categories that the product doesn’t serve. The reported TAM of $25B may include only $3B for the specific capability the company delivers. Customer interviews cut through this inflation by establishing exactly which problem the product solves, which budget it’s purchased from, and which alternative approaches it replaces — mapping the product to its actual competitive context rather than a broad analyst category.

The second mechanism is addressability assumptions. Even within the correct sub-category, top-down models assume that every company within the target firmographic profile is a potential buyer. In practice, large segments of the theoretical market are unreachable for structural reasons. Some companies are locked into long-term contracts with competitors. Some have built internal solutions that would cost more to replace than the product saves. Some operate in regulatory environments that preclude cloud-based solutions. Some simply lack the organizational maturity to adopt the product category at all. Customer and prospect interviews quantify these structural barriers by asking non-customers why they haven’t purchased and what would need to change for them to consider it.

The third mechanism is ASP inflation. TAM models typically use the target company’s current average selling price as the assumed revenue per account across the total market. But current ASP reflects the company’s existing customer mix — often weighted toward larger enterprises that pay premium prices. The incremental accounts needed to grow into the TAM are frequently smaller companies, less sophisticated buyers, or more price-sensitive segments that would transact at 40-60% of the current ASP. Customer interviews across different segments and company sizes reveal the actual price points at which different market tiers would purchase, producing a weighted ASP that is almost always lower than the headline figure in the CIM.

The fourth mechanism is competitive share assumptions. TAM-to-revenue bridges in CIMs imply achievable market shares of 15-25% — numbers that feel reasonable in the abstract but are rarely achievable in practice against entrenched competitors. Customer research maps the actual competitive dynamics: how many vendors buyers typically evaluate, what the common shortlist looks like, what factors drive vendor selection, and what the historical win rate is against each competitor. This ground-truth data frequently reveals that the target’s sustainable market share ceiling is 8-12% rather than the 20% implied by the investment thesis.

Bottom-Up TAM Construction Through Customer Research


The alternative to top-down TAM analysis is bottom-up construction — building the market size estimate from observed customer behavior rather than analyst assumptions. This approach requires more effort but produces dramatically more accurate results because it’s grounded in actual purchase decisions rather than theoretical addressability.

A bottom-up TAM study begins with the target’s existing customer base. Detailed interviews with 100-150 current customers establish the core use case profile: what problem does the product solve, what triggered the purchase, what alternatives were considered, and what value does it deliver? This analysis produces a precise definition of the “ideal customer profile” — not the aspirational ICP in the company’s marketing materials, but the empirically observed profile of companies that actually buy and retain the product.

The second phase extends research beyond current customers to three additional populations: churned customers, prospects who evaluated but didn’t purchase, and companies in the target firmographic profile that have never engaged with the product. Each population provides distinct TAM inputs. Churned customers reveal the boundaries of the product’s value proposition — the use cases and segments where it fails to deliver sufficient value to justify continued spending. Evaluated-but-lost prospects expose the competitive dynamics that constrain market share, including which competitors win and why. Unevaluated companies in the target profile identify the awareness and consideration barriers that limit the addressable market in practice.

This four-population research design produces a TAM estimate built on behavioral evidence. The total addressable market equals the number of companies matching the validated ICP, multiplied by the realistic ASP for each segment, multiplied by the achievable win rate against known competitors, adjusted for structural barriers identified through non-customer interviews. Each variable in this equation is calibrated against actual customer and market data rather than analyst assumptions.
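As a rough sketch, the bottom-up equation above can be computed directly. All figures here are hypothetical placeholders, not data from any actual study; in practice each input would be calibrated from the four interview populations.

```python
# Bottom-up TAM sketch with illustrative (hypothetical) inputs.
# Each variable would be calibrated from interview data, not analyst reports.

icp_accounts = 12_000           # companies matching the validated ICP
structural_barrier_rate = 0.40  # share of ICP facing structural adoption barriers
weighted_asp = 38_000           # blended annual selling price across segments ($)
achievable_win_rate = 0.10      # sustainable share against known competitors

reachable_accounts = icp_accounts * (1 - structural_barrier_rate)
bottom_up_tam = reachable_accounts * weighted_asp          # revenue if every reachable account bought
obtainable_revenue = bottom_up_tam * achievable_win_rate   # realistic ceiling for the target

print(f"Reachable accounts: {reachable_accounts:,.0f}")
print(f"Bottom-up TAM: ${bottom_up_tam:,.0f}")
print(f"Obtainable revenue ceiling: ${obtainable_revenue:,.0f}")
```

The point of the structure, rather than the specific numbers, is that every multiplier is an empirically observed quantity that can be challenged and re-tested, unlike a single top-down penetration assumption.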

User Intuition’s AI-moderated research platform makes this multi-population study design feasible within deal timelines. Traditional approaches would require separate research waves for each population — current customers, churned accounts, lost prospects, and non-engagers — with each wave taking 3-4 weeks. AI-moderated interviews can field all four populations simultaneously, delivering 250+ completed interviews within 48-72 hours. The platform’s access to a 4M+ participant panel means that even niche B2B segments can be reached without relying on the target company to provide contact lists, eliminating the selection bias that plagues traditional DD customer research.

Use Case Expansion Signals: Finding the Real Growth Story


While top-down TAM inflation is the most common problem in deal evaluation, there’s an equally important risk on the other side: missing genuine expansion opportunities that the target hasn’t yet pursued. Customer interviews are uniquely positioned to identify these opportunities because customers often see applications for a product that the company hasn’t imagined.

Use case expansion signals emerge when customers describe using the product in ways the company didn’t intend. A data integration platform sold for marketing analytics might have customers using it for financial reporting, supply chain visibility, or regulatory compliance. Each unintended use case represents a potential expansion vector — a new market segment that the product already serves but that isn’t reflected in the company’s positioning, pricing, or go-to-market strategy. These organic use cases are the most reliable indicators of expansion TAM because they represent validated demand rather than speculative market sizing.

The diagnostic questions for surfacing use case expansion are straightforward but rarely asked in traditional DD reference calls. “What other problems do you use this product to solve beyond its primary purpose?” “What workflows have you built around this product that the vendor probably doesn’t know about?” “If this product could do one more thing, what would it be?” These questions consistently surface expansion opportunities that management teams haven’t recognized — often because their product development roadmap is focused on the primary use case while customers have already extended the product into adjacent territory.

Adjacency mapping goes a step further by exploring the customer’s broader problem space around the core use case. When a customer uses a contract analysis tool for legal review, the adjacent problems include contract drafting, obligation tracking, risk assessment, and regulatory compliance monitoring. Each adjacency represents a potential expansion market — but only if customers validate that they’d purchase from the existing vendor rather than a specialist in that adjacent domain. Customer interviews test this willingness directly: “If [Company] offered obligation tracking in addition to contract analysis, would you consider it? What would you need to see? How much would you pay?”

This adjacency validation is critical because CIMs routinely include adjacent markets in their TAM calculations without testing whether the target has permission to enter those markets in customers’ minds. A company with deep credibility in contract analysis might have zero credibility in contract drafting — customers might view those as fundamentally different capabilities requiring different vendors. Only customer research can distinguish between adjacencies that represent genuine expansion TAM and adjacencies that exist only in the TAM slide.

Real Penetration Rate Discovery


Perhaps the most consequential output of bottom-up TAM research is the actual penetration rate — the percentage of the addressable market that has adopted the product category, not just the target company’s product. This metric is critical because it determines whether the growth opportunity is market creation (convincing non-buyers to adopt the category) or market capture (winning share from competitors within an already-adopted category). These are fundamentally different growth strategies with different economics, timelines, and risk profiles.

Top-down TAM models implicitly assume that penetration will increase steadily over the hold period, but they rarely validate this assumption against customer behavior. When non-customers are asked why they haven’t adopted the product category, their answers fall into three buckets that carry very different implications for the investment thesis.

The first bucket is awareness and consideration barriers: “We didn’t know solutions like this existed” or “We haven’t had time to evaluate options.” These are solvable problems — increased marketing spend and sales coverage can address them within a typical hold period. Non-customers in this bucket represent genuine near-term TAM that a well-resourced acquirer can activate.

The second bucket is structural barriers: “Our IT infrastructure can’t support this” or “Our regulatory environment prohibits cloud-based tools for this data.” These barriers are persistent and largely outside the target’s control. Non-customers in this bucket should be excluded from near-term TAM. When 40% of the theoretical addressable market faces structural adoption barriers, the functional TAM shrinks dramatically.

The third bucket is value barriers: “We looked at this category and the ROI didn’t justify the cost.” These responses are the most diagnostic for DD purposes because they reveal that the market has been tested and found wanting. Companies in this bucket have made a deliberate decision not to buy. Converting them requires either a step-function product improvement or a significant price reduction, neither of which is typically part of a PE value creation plan.

Quantifying the distribution across these three buckets — through 50-80 interviews with non-customers in the target firmographic profile — produces a penetration rate forecast grounded in behavioral evidence. If 60% of non-customers cite awareness barriers, penetration growth through sales and marketing investment is plausible. If 60% cite value barriers, the TAM is structurally smaller than the top-down model assumes.
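A minimal sketch of that bucket analysis, using hypothetical interview counts: only awareness-barrier respondents are treated as activatable within a hold period, while structural- and value-barrier respondents are excluded from near-term TAM.

```python
# Hypothetical distribution of 60 non-customer interviews across the
# three barrier buckets described above.
buckets = {"awareness": 33, "structural": 15, "value": 12}
total = sum(buckets.values())

# Awareness barriers are addressable with sales and marketing investment;
# structural and value barriers are not, within a typical hold period.
activatable_share = buckets["awareness"] / total

print(f"Near-term activatable share of non-customers: {activatable_share:.0%}")
```

With this split, a majority-awareness result supports a penetration-growth thesis; invert the awareness and value counts and the same arithmetic argues the TAM is structurally capped.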

Willingness-to-Pay Variation Across Segments


TAM models that use a single ASP across all segments systematically misrepresent the revenue opportunity. In practice, willingness-to-pay varies enormously across company sizes, industries, use case maturity levels, and geographic markets. Customer research quantifies this variation, producing a segmented revenue model that is far more predictive than the uniform ASP assumption.

The most common pattern in B2B software is a power law distribution: a small number of enterprise accounts pay premium prices that pull up the average, while mid-market and SMB accounts pay 30-70% less. When the TAM model applies the enterprise-inflated average ASP across all accounts, it overstates revenue potential from smaller segments — precisely the segments representing the largest growth opportunity by account count.

Customer interviews quantify this by testing willingness-to-pay within each tier. Enterprise buyers might justify $150K annual contracts for compliance features. Mid-market buyers might accept $40K for core functionality. SMB buyers might require self-service at $12K. The segmented TAM model — accounts per segment multiplied by segment-specific ASP and win rate — produces a fundamentally different total than the unsegmented CIM model.

Geographic variation adds another dimension. When the CIM includes international TAM at North American ASP levels, interviews with 30-40 potential buyers in a target geography quickly reveal whether the value proposition translates, what local competitors exist, and what price points the market will bear.

Integrating TAM Validation into the Deal Model


The output of a customer-research-based TAM validation should be a revised market model presenting three scenarios — base, upside, and downside — each calibrated against specific customer research findings.

The base case uses empirically validated TAM: confirmed ICP accounts multiplied by segment-specific ASPs multiplied by observed win rates, adjusted for structural barriers. The upside case adds validated expansion vectors: use case adjacencies that customers confirmed they’d purchase and penetration growth from non-customers who cited awareness rather than structural barriers. The downside case stress-tests against competitive risks identified through customer interviews — modeling churn impact from emerging competitors and using customer-validated price ceilings rather than management projections.
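The three scenarios layer onto the base estimate as simple adjustments. A sketch with hypothetical magnitudes (the uplift and downside factors would come from the adjacency-validation and competitive-risk interviews, respectively):

```python
# Three-scenario sketch with hypothetical figures.
base = 27_400_000          # validated ICP x segment ASPs x observed win rates
expansion_uplift = 0.25    # customer-confirmed adjacencies + awareness-driven penetration
downside_pressure = 0.15   # churn and price-ceiling pressure from emerging competitors

scenarios = {
    "base": base,
    "upside": base * (1 + expansion_uplift),
    "downside": base * (1 - downside_pressure),
}

for name, value in scenarios.items():
    print(f"{name:>8}: ${value:,.0f}")
```

Because each adjustment maps to a named research finding, an investment committee can interrogate the scenario spread line by line rather than debating a single point estimate.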

This three-scenario framework, built on commercial due diligence customer research, gives investment committees evidence-based bidding inputs. The economics of AI-moderated research make this rigor accessible for every deal. A comprehensive 250-interview TAM validation study through User Intuition costs approximately $5,000 and delivers results within 48-72 hours — a fraction of what traditional research firms charge, and trivial compared to the cost of overpaying for a business based on inflated TAM.

From TAM Validation to Value Creation Planning


The most sophisticated acquirers use DD-phase TAM research not just to validate the investment thesis but to build the post-close value creation plan. The same customer interviews that expose TAM inflation also reveal specific growth opportunities that a well-resourced owner can accelerate. When customer interviews identify a use case adjacency with strong demand signals, that’s a product development initiative with pre-validated demand. When non-customer interviews reveal that awareness barriers are the primary constraint on penetration growth, that’s a go-to-market investment thesis. When geographic willingness-to-pay testing confirms that the value proposition translates to new markets at viable price points, that’s an expansion roadmap.

The deal team that arrives at investment committee with a customer-validated TAM, a segmented growth model, and specific growth levers tied to customer evidence is making a fundamentally different case than the team presenting a Gartner-sourced TAM with a hockey-stick revenue projection. In competitive auction processes where multiple bidders are working from the same CIM data, the quality of TAM validation often determines who bids with confidence and who bids with hope.

Frequently Asked Questions

How does bottom-up TAM validation differ from top-down market sizing?

Top-down TAM calculations apply broad market size multiplied by assumed penetration rates without validating actual purchase behavior or willingness to pay across customer segments. Bottom-up validation through customer interviews reveals which use cases generate real revenue, which competitors are actually displacing the target, and what penetration rates are achievable in practice rather than in theory.

Why does willingness-to-pay variation across segments matter?

If customer interviews reveal that enterprise segments tolerate pricing 3x higher than SMB segments, the achievable revenue per customer in each segment differs substantially from the blended average assumed in a top-down model. This variation reshapes go-to-market prioritization and changes the revenue trajectory that justifies the acquisition multiple.

What is real penetration rate discovery?

Real penetration rate discovery interviews customers in the target segments about their current solution stack, switching history, and evaluation criteria — revealing the actual share of situations where the target product competes and wins. CIM-assumed penetration rates are typically extrapolated from market research reports rather than validated against actual buyer behavior.

How quickly can customer-based TAM validation be completed within a diligence window?

User Intuition can deploy AI-moderated interviews with 200+ customers or prospects in 48-72 hours, returning structured synthesis on use case adoption, pricing tolerance, and competitive dynamics before the end of a standard diligence window. At $20 per interview, validating a $2B TAM claim through 200 customer conversations costs less than a single analyst day.