
Customer Due Diligence Questions for PE: 50 Questions

Customer due diligence questions for private equity are the structured interview prompts used to validate investment theses by speaking directly with a target company’s customers — recruited independently, without the target’s involvement. The best questions share three traits: they are open-ended rather than closed, non-leading rather than confirmation-seeking, and designed for 5-7 levels of follow-up probing rather than checkbox completion. Reference calls with 3-5 hand-picked customers do not constitute customer diligence. Independent interviews with 50+ customers, using questions designed to test rather than confirm the thesis, produce the evidence that determines whether revenue projections will materialize.

For the complete PE customer research framework — from pre-LOI validation through portfolio monitoring and exit preparation — see the complete guide to customer research for private equity. This post focuses specifically on the 50 questions, how to use them, and the methodology that makes them produce reliable evidence rather than curated narratives.

Why Do Most Customer Due Diligence Questions Produce Unreliable Data?


The central challenge of customer due diligence is the gap between what customers say in a reference call and what they reveal in an independent, structured interview. Satisfaction scores from reference calls run 30-40% higher than scores from independently recruited interviews with the same company's customers. That is not margin of error. It is the distance between a curated narrative and reality.

Most deal teams compound this problem in three ways.

First, they ask leading questions that confirm the thesis rather than test it. “Are you satisfied with the product?” produces a “yes” from reference call customers that validates the retention assumption without testing it. “Walk me through the last time you seriously considered an alternative” reveals whether that satisfaction translates to actual loyalty or merely reflects the absence of a trigger event.

Second, they accept surface-level responses without probing. A customer who says “we love the product” might have a support ticket escalation in progress, a renewal negotiation planned for next quarter, and an internal champion who just left the company. None of those risks surface in a 15-minute reference call. All of them surface in a 30-minute interview with 5 levels of probing depth.

Third, they interview too few customers from a biased sample. Three to five reference calls from a customer base of 2,000 amount to a sample of at most 0.25%, pre-filtered for positivity. No financial model would survive scrutiny if it were built on 0.25% of the data, selected by the party with the strongest incentive to present favorable results. Yet deal teams routinely make nine-figure capital allocation decisions on exactly this basis.
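The sampling arithmetic behind this point can be made concrete. A minimal sketch, using a standard 95% margin-of-error calculation for an observed proportion with a finite-population correction (the interview counts and customer-base size are illustrative, not drawn from any specific deal):

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=2000):
    """95% margin of error for a proportion p observed in n interviews,
    with finite-population correction for a base of `population` customers."""
    se = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))
    return z * se * fpc

for n in (5, 50):
    print(f"{n} interviews: +/- {margin_of_error(n):.0%} on any observed proportion")
```

With five interviews, any observed proportion carries a margin of error of roughly 44 percentage points, which is to say it carries no information at all. At fifty interviews the margin tightens to roughly 14 points, enough to distinguish a healthy customer base from a deteriorating one.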

A study of PE-backed companies found that 40% of deals that underperformed on revenue within the first two years had customer risks that were identifiable at the time of investment — but were not surfaced during diligence. The questions existed. The methodology to ask them properly did not.

None of these risks are discoverable with a checklist. All of them are discoverable — if you ask the right questions and probe deep enough.

How Do You Use These Questions?


Select 8-12 Primary Questions Per Interview

The 50 questions below cover six research phases mapped to common PE thesis types. No single interview can explore all 50 at meaningful depth. Select 8-12 primary questions based on the specific thesis you are testing — loyalty-driven retention, competitive moat, pricing power, churn risk, or growth ceiling. The remaining questions serve as reference for follow-up probes or subsequent interview waves.

For the complete commercial due diligence template that covers question selection, interview cadence, and synthesis frameworks, the program design guide provides the operational detail.

Spend 60% of Interview Time on Follow-Up Probes

The primary question opens the door. The follow-up probes walk through it. In a 35-minute interview, plan to spend 21 minutes probing and 14 minutes on primary questions. This means you will fully explore 4-6 questions rather than superficially covering 12. Four questions explored to five levels of depth produce more actionable thesis evidence than twelve questions with no follow-up.

Reference calls fail here structurally. A 15-minute call with a coached customer has no room for probing. The first answer is the only answer. That is why reference calls confirm theses rather than test them.

Sequencing Matters

Start broad — relationship context, general experience, initial impressions. Move to specific — competitive evaluation, pricing sensitivity, switching calculus. Close with reflection — what would change the relationship, where the customer sees the vendor heading. This sequence prevents priming. A customer who has just answered three questions about pricing will unconsciously frame all subsequent answers through a value lens.

Never Lead

Leading questions are the most common and most destructive interview failure in PE customer diligence. “Is the product mission-critical?” plants criticality in the conversation. “Walk me through what would happen if this product disappeared tomorrow” lets the customer determine the actual importance. “Would you stay through a price increase?” implies the deal team is planning one. “How do you think about the value you get relative to what you pay?” lets the customer define the value-price relationship without tipping the thesis.

Category 1: Relationship Context and Opening (7 Questions)


These questions establish the customer’s frame of reference before introducing any thesis-specific topics. Context questions reveal how customers mentally organize their relationship with the target, which problems they consider primary, and how their needs have evolved. The intelligence value is in the narrative arc — customers whose relationship has deepened over time tell a fundamentally different retention story from those whose engagement has plateaued. Always start here to build rapport and get the customer talking in narrative mode before narrowing to specific thesis dimensions.

1. “Walk me through your relationship with [Target Company] from when you first became a customer to today. How has it evolved?”

Timeline reconstruction reveals the arc of the customer relationship. Listen for inflection points — moments where satisfaction shifted, engagement changed, or the customer’s needs evolved faster than the product. The distance between “it has been consistently great” and “there was a rough patch in year two” is the distance between a defensible retention asset and a relationship maintained by inertia.

Laddering prompt: “You mentioned things changed around [time period]. What specifically caused that shift?”

2. “What problem or need originally brought you to [Target Company]? Is that still the primary reason you use them?”

Origin-versus-current alignment reveals whether the product-market fit has expanded, narrowed, or shifted. A customer who originally bought for reason A but now stays for reason B has a different loyalty profile than one whose original need remains the primary driver. Misalignment between original purchase motivation and current retention driver is an early warning of competitive vulnerability.

Laddering prompt: “If the original reason is no longer primary, what replaced it? When did that shift happen?”

3. “How would you describe [Target Company] to a peer who has never heard of them?”

Peer description language is more diagnostic than any satisfaction score. “They are essential to how we operate” is a different customer than “they are fine, we have not had major problems.” The enthusiasm gap maps directly to organic growth potential and price increase tolerance. Listen for whether the customer describes the product, the relationship, or the outcome.

Laddering prompt: “You described them as [their words]. Is that how you would have described them a year ago, or has your perception changed?”

4. “Who else in your organization interacts with [Target Company]? How would they describe the relationship differently than you?”

Multi-stakeholder perspective reveals whether satisfaction is concentrated in one champion or distributed across the organization. A product beloved by the primary buyer but resented by end users has a different retention profile than one with broad organizational support. Champion risk — the possibility that the one advocate leaves — is invisible in single-stakeholder interviews.

Laddering prompt: “If [other stakeholder] were in this conversation instead of you, what would they say that you would not?”

5. “What does a typical interaction with [Target Company] look like? How frequently do you engage with them?”

Engagement frequency and quality reveal the depth of the customer relationship. Customers who interact weekly through product usage are differently retained than those who only interact at renewal. Listen for whether engagement is driven by the customer’s need or by the vendor’s outreach.

Laddering prompt: “Is that level of engagement enough for you? Would you want more or less?”

6. “Has anything surprised you — positively or negatively — about being a customer of [Target Company]?”

Surprise questions bypass rehearsed satisfaction narratives and surface genuine emotional responses. Positive surprises reveal where the product exceeds expectations. Negative surprises reveal where it falls short in ways the customer did not anticipate. Both are more diagnostic than balanced, measured feedback.

Laddering prompt: “How did that surprise affect your overall view of the relationship?”

7. “If you had to summarize your experience as a customer in one sentence, what would it be?”

The forced-summary question produces the customer’s distilled verdict. Across 50 interviews, the distribution of one-sentence summaries provides the most efficient thesis-testing data available. Cluster the responses and you see exactly where the customer base stands — enthusiastic, satisfied, indifferent, or at risk.

Laddering prompt: “Is that sentence different from what you would have said a year ago?”

Category 2: Loyalty and Retention (8 Questions)


When the investment thesis depends on customer retention, strong NPS, or sticky relationships as a core value driver, these questions probe whether retention is driven by genuine loyalty — which is defensible — or by switching costs and inertia — which is vulnerable to any competitor willing to invest in migration support. The distinction determines hold-period projections and exit multiples. This is the most common PE thesis pattern, and the one where reference calls fail most dramatically.

8. “If [Target Company] disappeared tomorrow, what would you do first? How quickly could you replace what they provide?”

This hypothetical stress test separates emotional loyalty from structural dependency. Customers who describe a painful, time-consuming replacement process are retained by switching costs. Customers who describe easy alternatives but choose to stay reveal genuine preference. The distinction matters enormously — switching cost retention is vulnerable to any competitor willing to subsidize migration.

Laddering prompt: “You said it would take [time] to replace them. What specifically makes it that difficult?”

9. “When was the last time you seriously considered switching to an alternative? What triggered that, and what ultimately kept you?”

Switching consideration frequency and recency are leading indicators of retention risk that no NPS score captures. A customer who considered switching six months ago and was retained by a last-minute discount is fundamentally different from one who has never evaluated alternatives. Listen for the retention mechanism — was it product value, switching cost, or a save offer?

Laddering prompt: “If that same trigger happened again tomorrow, would the outcome be different this time?”

Laddering example:

Primary question: “When was the last time you seriously considered switching? What kept you?”

Customer says: “About a year ago. We looked at some alternatives but decided to stay.”

Follow-up: “What triggered that evaluation?”

Customer says: “Our account manager left and the replacement was not as responsive.”

Follow-up: “And what ultimately kept you — was it the product, the relationship, or something else?”

Customer says: “Honestly, the migration would have been too painful. We have three years of data in their system.”

Follow-up: “So if migration were easy — say a competitor handled the entire data transfer — would you have switched?”

Customer says: “Probably, yes. At that point we were frustrated enough that an easy switch would have tipped it.”

Follow-up: “And today — is that frustration resolved, or are you still in the same position?”

Customer says: “It is better with the new account manager, but I would say we are in the ‘satisfied enough not to go through the pain of switching’ category, not the ‘genuinely loyal’ category.”

This sequence revealed that what management would report as a retained customer is actually a customer held by migration friction, not by product value. That distinction is the difference between a defensible retention asset and a customer base vulnerable to any competitor that solves the migration problem.

10. “How would you describe your level of satisfaction — not on a scale, but in your own words?”

Open-ended satisfaction captures nuance that numeric scales compress. Customers who say “fine” and customers who say “essential to our operations” both might rate 7/10, but their retention profiles are entirely different. The language itself is the data.

Laddering prompt: “What would need to change for you to describe it differently — either more positively or more negatively?”

11. “What would [Target Company] have to do to lose you as a customer?”

This inverts the typical satisfaction question. Instead of asking what they like, you ask what would break the relationship. The specificity of the answer indicates how much the customer has thought about leaving — and how close to the edge they already are. Vague answers (“something really bad”) suggest low churn risk. Specific answers (“another price increase” or “if they do not fix the reporting module”) indicate active frustration.

Laddering prompt: “How likely do you think that scenario is?”

12. “Has anyone on your team ever advocated for switching away? What was their argument?”

Internal advocacy for switching reveals organizational-level retention risk that is invisible from the outside. Even if the primary contact is satisfied, a CFO questioning ROI or an end user frustrated with the interface represents a dormant threat that activates at renewal.

Laddering prompt: “How was that advocacy resolved? Did it change how the company is perceived internally?”

13. “If they raised prices 15% at your next renewal, what would you do?”

Price increase tolerance is a direct proxy for perceived value and competitive alternatives. Aggregate this across 50+ interviews and you get a pricing power curve that directly informs the value creation plan. The distribution matters more than the average — 60% who would absorb and 40% who would evaluate is a very different risk profile than 90% who would absorb and 10% who would switch.

Laddering prompt: “At what percentage increase would your response change from [their answer] to actively looking for alternatives?”
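Aggregating answers to this question across interviews is mechanical once each response is coded. A minimal sketch, with hypothetical coded responses and an assumed conversion rate for active evaluators (none of these numbers come from real interview data):

```python
from collections import Counter

# Hypothetical coded responses to "If they raised prices 15%, what would
# you do?" across 50 independent interviews (codes are illustrative).
responses = ["absorb"] * 28 + ["evaluate alternatives"] * 15 + ["switch"] * 7

counts = Counter(responses)
n = len(responses)
for code, count in counts.most_common():
    print(f"{code}: {count}/{n} ({count / n:.0%})")

# Accounts directly at risk if the increase lands: the "switch" cohort,
# plus an assumed share of active evaluators who churn. The 30% conversion
# rate is an assumption for illustration, not a benchmark.
evaluate_churn_rate = 0.3
at_risk = (counts["switch"] + counts["evaluate alternatives"] * evaluate_churn_rate) / n
print(f"share of accounts at risk: {at_risk:.0%}")
```

The point of the exercise is the distribution, not the headline number: the same 23% at-risk figure means something very different if it is concentrated in the largest accounts, which is why responses should be coded alongside account revenue.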

14. “Compared to when you first became a customer, are you more loyal, less loyal, or about the same? What changed?”

Loyalty trajectory matters more than current state. A customer whose loyalty is increasing represents compounding value. A customer whose loyalty is declining, even if currently high, represents a deteriorating asset. The trajectory across 50 interviews produces a retention trend that backward-looking churn data cannot.

Laddering prompt: “What would reverse that trajectory?”

15. “What is the one thing that keeps you with [Target Company] that no competitor currently matches?”

This identifies the actual defensible advantage from the customer’s perspective — which is often different from what management believes their moat is. When multiple customers name the same thing, you have identified the real retention driver. When answers are scattered or vague, the moat is thinner than the thesis assumes.

Laddering prompt: “How important is that one thing relative to everything else they provide?”

Category 3: Competitive Moat (8 Questions)


When the thesis depends on defensible competitive positioning — product differentiation, network effects, data advantages, or ecosystem lock-in — these questions test the moat from the customer’s perspective, which is the only perspective that matters. Management can claim product superiority. Customers reveal whether that superiority is perceived, valued, and sufficient to prevent switching. For the complete commercial due diligence framework, competitive moat validation is one of five standard thesis dimensions.

16. “What other companies or solutions did you evaluate before choosing [Target Company], or have evaluated since?”

The competitive consideration set from the customer’s perspective often differs from what management identifies as competitors. New entrants, adjacent products, and DIY solutions that the target does not track may represent the actual competitive threat. When 30% of customers mention a competitor that management has not identified, that is a material intelligence gap.

Laddering prompt: “Which of those evaluations was the most serious? How close did you come to choosing them?”

17. “What does [Target Company] do better than anyone else in the market?”

Consensus on a specific advantage indicates a real moat. Scattered, vague, or absent answers indicate perceived parity — a dangerous position regardless of what the management deck claims. This question also reveals the customer’s comparison framework.

Laddering prompt: “How important is that specific advantage to your decision to stay?”

18. “Where do you think competitors are catching up or pulling ahead?”

Customers who use multiple products or who evaluate alternatives regularly have visibility into competitive dynamics that management often lacks. They see feature parity arriving, service quality shifting, and positioning changes in real time. This is forward-looking competitive intelligence that market reports lag by 6-12 months.

Laddering prompt: “Does that competitive catching-up change how you think about your own renewal?”

19. “If a competitor offered you the same product at 20% less, would you switch? What about 40% less?”

Price-based switching thresholds quantify the premium the moat commands. A moat that survives a 20% discount has real depth. A moat that collapses at 10% is price competitiveness, not differentiation. Across 50 interviews, this produces a switching-price curve that directly quantifies the competitive moat’s financial value.

Laddering prompt: “What if they offered more features at the same price? Which is harder to replicate — the price advantage or the product advantage?”

Laddering example:

Primary question: “If a competitor offered you the same product at 20% less, would you switch?”

Customer says: “At 20%? Probably not. The switching cost is too high.”

Follow-up: “When you say switching cost — do you mean financial cost, operational disruption, or something else?”

Customer says: “Operational disruption mainly. We have customized workflows that would need to be rebuilt.”

Follow-up: “If the competitor offered to handle the entire migration — rebuild your workflows, transfer your data, zero disruption — would the 20% matter then?”

Customer says: “That changes things. If migration were painless, yes, I would seriously consider it.”

Follow-up: “So the moat is really the migration complexity, not the product differentiation?”

Customer says: “I suppose that is right. The product is good, but it is not 20% better than what I have seen from competitors. The lock-in is operational, not preferential.”

Follow-up: “Has any competitor ever offered to handle the migration?”

Customer says: “Not yet. But when they do, I think a lot of customers will move.”

This sequence revealed that the perceived competitive moat — product quality — is actually operational lock-in. That is a fundamentally different asset. Operational lock-in depreciates the moment a competitor invests in migration tools. Product preference compounds over time. The thesis assumption about “competitive moat” needs to specify which kind of moat, because the durability is entirely different.

20. “Is there anything [Target Company] provides that you literally could not get from anyone else?”

True differentiation versus perceived differentiation. When customers identify something genuinely unique, it validates the moat thesis. When they struggle to name anything, the competitive advantage is narrower than the investment model assumes.

Laddering prompt: “How long do you think that uniqueness will last before competitors replicate it?”

21. “How integrated is [Target Company] into your daily workflow? Could you extract it easily?”

Integration depth is a measurable dimension of competitive moat. Products embedded in daily workflows across multiple user types create structural switching costs that survive competitive pricing pressure. The deeper the integration, the more defensible the retention — but also the more dependent the thesis is on a switching-cost moat rather than a preference moat.

Laddering prompt: “If you were starting from scratch today, would you still choose [Target Company]? Why or why not?”

22. “How would you rate [Target Company] on innovation — are they staying ahead, keeping pace, or falling behind?”

Innovation velocity from the customer’s perspective reveals whether the moat is widening or narrowing. Customers describing a company that was once innovative but has slowed are describing a depreciating competitive asset. This trajectory directly affects hold-period assumptions about market position.

Laddering prompt: “Can you point to a specific example of innovation — or lack of it — in the last year?”

23. “Have you noticed any new entrants or alternatives in the last 12 months that got your attention?”

Emerging competitive threats are often visible to customers before they appear in market reports or management presentations. A cluster of customers mentioning the same new entrant is an early warning signal. Listen for whether the attention is curiosity, evaluation, or active consideration.

Laddering prompt: “What specifically about [new entrant] got your attention? Have you had a conversation with them?”

Category 4: Pricing Power (7 Questions)


When the value creation plan assumes post-acquisition price increases — one of the most common PE levers — these questions test whether the customer base will absorb increases or revolt. Pricing power is a thesis about customer perception, not about the product. A customer who perceives strong value surplus will absorb a 15% increase without blinking. A customer already questioning ROI at current prices will churn. Understanding the cost dynamics of commercial due diligence itself is one thing. Understanding the target’s pricing power requires direct customer evidence.

24. “How do you think about the value you get relative to what you pay for [Target Company]?”

Open-ended value-to-price perception. Customers who describe strong value surplus can absorb increases. Customers who describe paying “about what it is worth” or “a bit more than I would like” are at the ceiling. The language itself calibrates where the customer sits on the pricing power spectrum.

Laddering prompt: “Can you quantify that value — even roughly? What would it cost you to solve this problem without them?”

25. “When was the last time the price changed, and how did your team react internally?”

Historical price increase response is the best predictor of future price increase response. Reactions ranging from “we did not notice” to “we had a serious internal conversation about switching” indicate where the customer sits on the tolerance spectrum. Aggregate this across 50 interviews to model the expected churn from a planned increase.

Laddering prompt: “If the same increase happened again, would the reaction be the same or different?”

26. “If pricing went up at your next renewal, who in your organization would be involved in evaluating whether to continue?”

Decision-chain mapping for price-sensitive renewals. When the primary user can absorb increases without escalation, pricing power is strong. When increases trigger procurement involvement or executive review, the dynamics shift toward competitive pressure and benchmark comparison.

Laddering prompt: “What is the threshold where it escalates from your decision to a committee decision?”

27. “Do you benchmark [Target Company]’s pricing against alternatives? How often?”

Active benchmarking frequency indicates price sensitivity. Customers who never benchmark are loyal or locked in. Customers who benchmark at every renewal are one competitive offer away from churning. The distribution across the customer base reveals how exposed the thesis is to competitive pricing moves.

Laddering prompt: “What would you do if a benchmark showed you were paying significantly above market?”

28. “Are there features or tiers you are paying for that you do not use? How does that affect your perception of the price?”

Unused features erode perceived value. Customers paying for capabilities they do not use feel overcharged even at objectively reasonable price points. This perception is a ceiling on pricing power that feature utilization data alone cannot capture.

Laddering prompt: “If they offered a stripped-down version at 40% less, would that be more attractive?”

29. “How does the cost of [Target Company] compare to the cost of the problem it solves?”

Value anchoring against the problem cost. When the product cost is small relative to the problem cost, pricing power is strong. When they are comparable, the customer is already weighing cost against value at every renewal. This ratio across 50 interviews produces the most reliable pricing power indicator available.

Laddering prompt: “If the product cost grew to be a larger share of the problem cost, at what point would it stop being worthwhile?”

30. “If you had to justify the cost of [Target Company] to a new CFO who had never used it, what would you say?”

The justification narrative reveals the strength of the internal business case. Customers who articulate clear, quantified ROI provide a strong foundation for pricing power. Customers who struggle to justify the cost are exactly the accounts that churn after price increases. The ease and specificity of the justification is the data.

Laddering prompt: “Have you actually had that conversation? How did it go?”

Category 5: Churn Risk (8 Questions)


When the financial model assumes a specific retention rate, these questions validate whether that rate is sustainable, improving, or at risk. Churn risk diligence is particularly critical for recurring revenue businesses where the terminal value depends on long-term retention. What the SaaS commercial due diligence framework calls “net revenue retention” is ultimately a measurement of answers to these questions, aggregated across the entire customer base.

31. “What is the biggest frustration you have with [Target Company] right now?”

Current frustrations are leading indicators of future churn. When the same frustration surfaces across 15-20% of independently recruited customers, it represents a systemic retention risk that management may or may not be addressing. Reference calls will never surface this because management selects references who are not frustrated.

Laddering prompt: “How long have you had that frustration? Have you raised it with the company?”

32. “Has there been a moment in the last year where you thought about leaving? What happened?”

Recent consideration events are the most predictive churn signal available. The specificity of the response — a vague “I think about it sometimes” versus a detailed “in September when they botched our migration” — indicates the severity. Frequency matters too — annual consideration is different from monthly.

Laddering prompt: “What prevented you from acting on that? Is the prevention still in place?”

33. “How would you describe the quality of support you receive? Has it changed over time?”

Support quality degradation is a common post-acquisition risk, and customers notice the trajectory before it shows up in ticket metrics. Declining support satisfaction during the diligence period is a red flag for what might happen when cost optimization begins post-close.

Laddering prompt: “Can you give me a specific example of support that was either particularly good or particularly poor?”

34. “Are there any promises made during your initial sales process that were never fully delivered?”

Unresolved expectation gaps accumulate into churn pressure. Each unfulfilled promise is a dormant risk that can activate when a competitor offers what was originally promised. The age and severity of unresolved promises across the customer base is a retention risk indicator that no survey captures.

Laddering prompt: “Has that undelivered promise affected how you view new commitments from the company?”

35. “If a colleague asked you to rate [Target Company] honestly, in private, what would you say?”

The private recommendation question captures candor that public-facing NPS scores miss. The gap between what customers tell the vendor and what they tell peers is a measure of social desirability bias in the target’s satisfaction data. This gap, across 50 interviews, recalibrates the reported NPS or CSAT to its actual level.

Laddering prompt: “Is there a gap between what you would say privately and what you have said in their surveys? Why?”

Laddering example:

Primary question: “If a colleague asked you to rate them honestly, in private, what would you say?”

Customer says: “I would say they are good. Not great, but good.”

Follow-up: “What separates ‘good’ from ‘great’ for you?”

Customer says: “Great would be proactive. They solve problems I did not know I had. Good means they do what I expect, but I have to push for anything beyond that.”

Follow-up: “And what would you have said a year ago?”

Customer says: “A year ago I would have said great, actually. They were more responsive, more proactive.”

Follow-up: “What changed?”

Customer says: “They grew fast. I think our account manager went from managing 10 accounts to managing 40. The quality of attention dropped.”

Follow-up: “Does that make you worried about the future trajectory?”

Customer says: “Yes. If they keep growing at this rate without adding support, I think a lot of customers will start looking around. I already know two others who are.”

This sequence moved from “good” — which would score 7/10 in a survey — to a specific, systemic risk: support degradation from rapid growth that is already causing peer conversations about alternatives. That is a retention risk that would change the hold-period churn assumption in any financial model.
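The candor gap this question surfaces can be quantified once private interview ratings are paired with the vendor's survey scores for the same accounts. A minimal sketch with illustrative pairs (the scores are hypothetical, not real interview data):

```python
# Hypothetical paired scores per account: what the customer told the
# vendor's survey vs. what they said in an independent interview.
paired = [(9, 7), (8, 8), (10, 6), (7, 7), (9, 5),
          (8, 6), (10, 9), (7, 4), (8, 7), (9, 8)]

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

survey = [s for s, _ in paired]
private = [p for _, p in paired]
print(f"vendor-reported NPS: {nps(survey)}")
print(f"recalibrated NPS:    {nps(private)}")
print(f"mean candor gap:     {sum(s - p for s, p in paired) / len(paired):.1f} points")
```

In this illustration a reported NPS of 50 recalibrates to -30 once private scores replace survey scores, because NPS is sensitive at the promoter and detractor boundaries: a two-point candor gap is enough to move an account from promoter to passive, or passive to detractor.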

36. “What is the biggest risk you see in continuing to rely on [Target Company]?”

Customer-identified risks — platform stability, company viability, product stagnation, key personnel dependency — are forward-looking indicators that financial statements do not capture. When customers name the same risk, the deal team needs to validate whether that risk is real and whether it is priced into the model.

Laddering prompt: “How would you mitigate that risk if it materialized?”

37. “Have you reduced the number of seats, licenses, or the tier of service in the last renewal cycle?”

Contraction is partial churn. Customers who downgrade before canceling are demonstrating the exact trajectory that leads to full departure. Contraction rates are often more diagnostic than gross churn rates because they reveal the direction of travel before it reaches the terminal point.

Laddering prompt: “What drove that reduction? Cost, usage, or something else?”
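To make the contraction-versus-churn distinction concrete, here is a minimal sketch separating the two across a set of interviewed accounts; the account names and ARR figures are hypothetical.

```python
# Illustrative per-account ARR at the last two renewals: (prior, current).
# Current ARR of 0 means the account churned entirely.
renewals = {
    "acct_a": (100_000, 120_000),  # expanded
    "acct_b": (80_000, 80_000),    # flat
    "acct_c": (60_000, 40_000),    # contracted: partial churn
    "acct_d": (50_000, 0),         # churned
    "acct_e": (90_000, 70_000),    # contracted
}

churned = [a for a, (prior, curr) in renewals.items() if curr == 0]
contracted = [a for a, (prior, curr) in renewals.items() if 0 < curr < prior]

churn_rate = len(churned) / len(renewals)
contraction_rate = len(contracted) / len(renewals)

# Contraction often leads churn: accounts shrinking now are a
# leading indicator for the next period's gross churn.
print(f"gross churn: {churn_rate:.0%}, contraction: {contraction_rate:.0%}")
# → gross churn: 20%, contraction: 40%
```

In this illustration, a model calibrated only to the 20% gross churn figure would miss that twice as many accounts are already on the downgrade trajectory that precedes departure.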

38. “What would have to change for you to increase your investment with [Target Company]?”

This flips from risk to opportunity. The conditions for expansion reveal what the company would need to do to move from retention to growth — and whether those conditions are realistic within the hold period. When conditions are simple and achievable, the growth thesis has customer-side validation. When conditions are complex or unlikely, expansion assumptions should be discounted.

Laddering prompt: “How likely is it that they make those changes in the next 12-18 months?”

Category 6: Growth Ceiling (12 Questions)


When the value creation plan depends on revenue growth — new customers, expansion within existing accounts, new products, or new markets — these questions test whether the addressable opportunity is as large as the model assumes, from the perspective of the people who would need to buy more. Growth ceiling validation requires the broadest question set because growth has the most thesis variants. The AI-powered commercial due diligence methodology makes it feasible to test all growth dimensions simultaneously across 50+ interviews.

39. “Are there problems in your workflow that [Target Company] is not solving today but could?”

Adjacent needs from existing customers define the organic expansion opportunity. Consistent themes across 50+ interviews reveal product extension opportunities that management may or may not have identified. The gap between what customers need and what the product provides is the white-space map.

Laddering prompt: “How much would you pay for that capability if they offered it?”

40. “If [Target Company] launched a new product or service, what would it need to be for you to buy it?”

New product appetite and willingness to extend the vendor relationship. Customers who would consider additional products validate cross-sell potential. Customers who prefer best-of-breed from different vendors constrain it. The distribution across the customer base calibrates the cross-sell assumption in the growth model.

Laddering prompt: “What would you need to see before buying — case studies, a pilot, or just the product itself?”

41. “Are there other teams or departments in your organization that could benefit from [Target Company]?”

Land-and-expand potential within existing accounts. The specificity of the answer — naming actual departments versus a vague “maybe” — indicates how developed the expansion opportunity actually is. Listen for whether the customer has already advocated internally or whether the idea is new to them.

Laddering prompt: “Have you ever recommended them internally? What was the response?”

42. “What is preventing you from using [Target Company] more than you currently do?”

Growth blockers from the customer’s perspective. Common answers — pricing structure, missing features, integration gaps, internal adoption barriers — each represent a specific constraint on the growth thesis. The frequency of each blocker across 50 interviews prioritizes the product investment roadmap.

Laddering prompt: “If that blocker were removed, how much more would you use the product?”

43. “How does your company’s budget for this category compare to last year? Is it growing, flat, or shrinking?”

Customer-side budget trajectory is a demand-side growth indicator that bottom-up TAM models miss. If customers are reducing category spend, the growth ceiling is lower than the market sizing suggests regardless of competitive dynamics. Budget expansion validates the secular growth thesis.

Laddering prompt: “What is driving that budget change? Is it company-specific or industry-wide?”

44. “Do you see your need for what [Target Company] provides increasing or decreasing over the next 2-3 years?”

Forward-looking demand perception. Customers whose needs are growing validate the growth thesis. Customers whose needs are plateauing or declining indicate market maturation that compresses growth multiples.

Laddering prompt: “What would accelerate that increase? Or what might cause it to plateau?”

45. “Is [Target Company] the kind of product you would recommend to someone in a completely different industry? Why or why not?”

Market expansion potential through the customer lens. When customers see broad applicability, horizontal expansion is plausible. When they see the product as narrowly suited to their specific context, new-market assumptions should be discounted.

Laddering prompt: “Have you ever actually recommended them to someone in a different industry?”

46. “What is missing from [Target Company]’s offering that you currently solve with a different tool or manual process?”

Whitespace identification. Every manual workaround and supplementary tool represents an unmet need that the target could potentially capture. The frequency of these unmet needs, and customers’ willingness to pay to solve them, define the organic expansion runway.

Laddering prompt: “How much do you spend on that workaround — in time, money, or both?”

47. “If you had unlimited budget, how much more would you invest in the capabilities [Target Company] provides?”

Budget elasticity of demand for the target’s category. High elasticity signals that the growth ceiling is constrained by customer budgets, not by market opportunity — a different growth dynamic than product-constrained or competition-constrained ceilings. Low elasticity signals category saturation.

Laddering prompt: “What specifically would that additional investment fund?”

48. “Where do you see the biggest gap between what [Target Company] delivers today and what you will need in two years?”

The future-need gap identifies whether the target’s product trajectory aligns with where customers are headed. Convergence validates the growth thesis. Divergence suggests the target will need significant product investment — which affects both growth rates and capital allocation during the hold period.

Laddering prompt: “Have they communicated a roadmap that addresses that gap?”

49. “What would a competitor need to offer to make you seriously consider switching — not just pricing, but in terms of capability?”

The minimum viable competitive threat. If the bar is low — “better pricing and decent support” — the moat is shallow and growth assumptions are at risk. If the bar is high — “they would need to replicate our entire integration and five years of historical data” — switching costs create growth runway protection.

Laddering prompt: “Do you think any current competitor is close to clearing that bar?”

50. “Looking at your industry overall, is demand for what [Target Company] provides a growing wave or a maturing market?”

Industry-level demand trajectory from the buyer’s perspective. This complements the top-down TAM analysis with bottom-up demand-side evidence. When customers across industries agree that demand is growing, the growth thesis has demand-side validation. When they describe maturation, the growth model needs to explain where incremental demand will come from.

Laddering prompt: “What would change that trajectory in either direction?”

Moderator Mistakes That Undermine Customer Due Diligence Interviews


Even with the right questions and independently recruited customers, interview execution determines data quality. These are the seven most common mistakes that turn due diligence interviews from strategic evidence into wasted time and capital.

Accepting the first response without probing. In customer due diligence, the initial answer is a polished, socially acceptable narrative. Surface responses match the actual customer dynamic only 40-60% of the time. Every primary question requires at least 3-4 follow-up probes. Accepting “we are satisfied” without probing is the due diligence equivalent of accepting management projections without testing assumptions.

Asking leading questions that confirm the thesis. “Is the product mission-critical?” plants criticality. “Would you say customer support is strong?” plants the answer. Every leading question converts a data-gathering exercise into a confirmation exercise. The deal team ends up validating what they already believed, which makes the customer diligence worthless as an independent check.

Having the deal team conduct the interviews. When the PE firm’s own associates conduct customer interviews, social dynamics contaminate the data. Customers sense the evaluative context, moderate their criticism, and amplify their praise. Independent moderation — whether human or AI — eliminates the dynamic where customers perform for the buyer rather than report their genuine experience.

Conducting interviews too late. Customer diligence that arrives after the go/no-go decision is retrospective justification, not evidence. The most common complaint from deal teams is that customer insights were valuable but arrived too late to influence the decision. Within competitive deal timelines, traditional 4-8 week research cycles are structurally incompatible with the decision cadence.

Treating the guide as a survey. Rushing through 20 questions at surface level produces 20 data points of minimal intelligence value. A moderator who explores 6 questions at genuine depth produces evidence that changes deal committee recommendations. The value per minute of interview time comes from depth, not breadth.

Asking closed-ended questions. Yes/no questions produce yes/no data. “Are you happy with the product?” produces a “yes” that tells the deal team nothing about relative enthusiasm, competitive vulnerability, or expansion potential. Open-ended questions produce the nuance that separates thesis-confirming evidence from thesis-challenging evidence.

Failing to ground in specific events. “How do you generally feel about [Target Company]?” produces opinions. “Walk me through the last time you interacted with support” or “describe your most recent renewal negotiation” produces evidence. General sentiment questions invite idealized self-reporting. Specific event questions force customers to reconstruct actual experiences with concrete details that either support or contradict the thesis.

How AI Moderation Changes Customer Due Diligence


Traditional customer diligence through consulting firms or expert networks takes 4-8 weeks for 15-20 interviews at $75,000-$200,000. In competitive deal processes, that timeline is incompatible with the decision cadence. Customer evidence arrives after the go/no-go decision, rendering it retrospective rather than influential. The question is not whether customer diligence is valuable — every deal team that has been surprised by post-close customer dynamics will tell you it is. The question is whether it can be done fast enough and cheaply enough to actually inform the decision.

User Intuition’s AI-moderated interviews change the execution model for PE customer diligence. Each conversation maintains 5-7 levels of laddering depth with 98% participant satisfaction — the 50th interview is conducted with the same probing rigor as the first. There is no interviewer fatigue, no bias drift across sessions, and no variation in question depth. Customers engage more candidly with AI moderation because the social dynamics that suppress criticism in human conversations are eliminated. Sensitive topics — frustration with the vendor, consideration of competitors, internal politics around the purchasing decision — surface more readily.

The economics make customer diligence feasible for every deal, not just the largest ones. At $20 per interview, a 50-interview study costs $1,000 with synthesized findings in 48-72 hours. A 100-interview comprehensive study costs $2,000. Compare that to $75,000-$200,000 through expert networks like GLG, Tegus, or Guidepoint. The cost reduction means deal teams can run customer diligence on every prospective acquisition — pre-LOI for thesis formation, post-LOI for thesis validation, and post-close for portfolio monitoring.

For PE firms running customer research across portfolios, User Intuition’s Intelligence Hub stores every interview in a searchable repository. Cross-deal pattern recognition becomes possible. Category-level insights accumulate. The commercial due diligence capability compounds across the portfolio rather than expiring after each individual deal.

What to Do With the Responses?


Fifty interview transcripts are raw material. The value is in synthesis — identifying patterns across independently recruited customers, quantifying the distribution of sentiment for each thesis dimension, and converting individual stories into evidence that directly addresses the investment committee’s questions. The commercial due diligence template provides the complete analysis framework.

The essential synthesis steps: map each response to the thesis assumption it tests, aggregate responses to produce a confidence score for each assumption, quantify sentiment distribution rather than averages, flag disconfirming evidence as prominently as confirming patterns, trace every conclusion to verbatim customer quotes, and produce segment-level views (enterprise versus SMB, new versus long-tenured) that prevent misleading aggregation.
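The mapping and aggregation steps above can be sketched in a few lines. The coding schema here — assumption labels, verdict categories, the sample responses — is an illustrative assumption, not a prescribed format; the point is that each assumption gets a full evidence distribution, not an average.

```python
from collections import Counter, defaultdict

# Illustrative coded interview responses: each maps to the thesis
# assumption it tests and a coded verdict (hypothetical schema).
responses = [
    {"assumption": "retention", "verdict": "supports"},
    {"assumption": "retention", "verdict": "supports"},
    {"assumption": "retention", "verdict": "contradicts"},
    {"assumption": "pricing_power", "verdict": "supports"},
    {"assumption": "pricing_power", "verdict": "neutral"},
    {"assumption": "pricing_power", "verdict": "contradicts"},
]

by_assumption = defaultdict(Counter)
for r in responses:
    by_assumption[r["assumption"]][r["verdict"]] += 1

for assumption, dist in by_assumption.items():
    total = sum(dist.values())
    confidence = dist["supports"] / total  # share of supporting evidence
    # Report the full distribution, not an average: disconfirming
    # evidence stays visible alongside the confidence score.
    print(assumption, dict(dist), f"confidence={confidence:.0%}")
```

In a real study the same structure would carry verbatim quote IDs and segment tags (enterprise versus SMB, tenure bands) on each response, so every aggregate traces back to specific customer statements.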

The output is a customer evidence appendix that sits alongside financial, legal, and commercial diligence — structured evidence from the only source that ultimately determines whether the revenue projections in the model will materialize. For firms that have experienced commercial due diligence failures driven by missed customer signals, the methodology described in this guide is the prevention framework.

For structuring CDD evidence into IC-ready presentations, see Presenting CDD Findings to Investment Committee. For adapted question methodology for growth-stage targets, see Customer Due Diligence for Growth Equity. For building these questions into a recurring portfolio program, see Customer Due Diligence Program for PE Portfolio.

Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

What is customer due diligence in private equity?

Customer due diligence is independent, structured research with a target company's customers — recruited without the target's involvement — to validate investment theses. It goes beyond reference calls to assess customer loyalty, competitive positioning, pricing power, churn risk, and growth potential. It complements financial, legal, and commercial diligence with the unfiltered voice of the customer.

How many customer interviews are needed for reliable due diligence?

Fifty or more independent interviews is the minimum for reliable pattern recognition. This provides enough volume to segment by customer type, tenure, and satisfaction level. Compare that to 3-5 reference calls that provide a curated, unrepresentative sample. AI-moderated platforms like User Intuition make 50+ interviews practical within deal timelines at $20 per interview.

What does independent recruitment mean?

Independent recruitment means sourcing participants from a 4M+ panel or public customer databases — not the target company's curated reference list. Multi-layer screening verifies actual product usage, employment at the target's customer companies, and decision-making authority. This ensures candor that reference calls cannot provide.

How long does customer due diligence take?

AI-moderated customer due diligence through User Intuition completes in 48-72 hours — from recruitment through synthesized findings. Traditional consulting firm research takes 4-8 weeks for 15-20 interviews. This speed means customer evidence can inform go/no-go decisions rather than arriving after the deal has closed.

How is customer due diligence different from reference calls?

Reference calls use 3-5 customers hand-selected by the target company to present a favorable narrative. Customer due diligence interviews 50+ customers recruited independently, using structured methodology designed to surface both strengths and risks. Reference call satisfaction scores run 30-40% higher than independently recruited interviews for the same company.

What should customer due diligence questions for PE cover?

PE customer due diligence questions should cover six areas: relationship context, loyalty and retention, competitive moat, pricing power, churn risk, and growth ceiling. Start with broad relationship questions to avoid priming, then ladder into specific thesis-testing territory. Each question should be followed by 4-5 levels of probing to reach the actual drivers behind surface-level satisfaction.

How do you avoid leading questions?

Replace closed-ended prompts with open-ended alternatives. Instead of 'Are you satisfied with the product?' ask 'Walk me through your experience as a customer from the beginning to today.' Instead of 'Would you switch to a competitor?' ask 'When was the last time you seriously considered an alternative?' Leading questions confirm the thesis — open-ended questions test it.

What is laddering in due diligence interviews?

Laddering is a probing technique that moves from surface-level answers to root motivations through successive follow-up questions. A customer who says they stay because of 'the product' might reveal through laddering that the real driver is switching cost inertia. Laddering typically requires 4-5 probes and is the difference between confirming assumptions and actually testing them.

How much does customer due diligence cost?

Traditional consulting firm customer diligence costs $75,000-$200,000 for 15-20 interviews over 4-8 weeks. AI-moderated platforms like User Intuition deliver 50+ interviews at $20 each — a 50-interview study costs $1,000 with results in 48-72 hours. The cost reduction makes it feasible to run customer diligence on every deal, not just the largest ones.

How effective is AI-moderated customer due diligence?

AI-moderated customer due diligence achieves 98% participant satisfaction and maintains consistent probing depth across all interviews. User Intuition conducts 50+ interviews simultaneously with 5-7 levels of laddering, delivering synthesized findings in 48-72 hours. Customers speak more candidly about vendor weaknesses and switching considerations without the social pressure of a human interviewer.

How does customer due diligence relate to commercial due diligence?

Commercial due diligence is the broader category covering market sizing, competitive analysis, and growth potential assessment. Customer due diligence is the primary research component — structured interviews with actual customers to validate the assumptions underlying the commercial thesis. For a detailed comparison, see the distinction between commercial and financial due diligence.

How do you synthesize customer due diligence findings?

Map each interview response to the specific thesis assumption it tests, then aggregate across 50+ interviews to produce confidence scores for each assumption. Quantify sentiment distribution rather than averages, flag disconfirming evidence prominently, and trace every conclusion to verbatim customer quotes. The output is a customer evidence appendix alongside financial and legal diligence.
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
