
Presenting CDD Findings to Investment Committee

By Kevin, Founder & CEO

The investment committee memo is the point of maximum leverage for customer due diligence. It is where 50-200 customer interviews either influence a capital allocation decision worth tens or hundreds of millions of dollars, or sit in an appendix that committee members flip past on the way to the financial model.

The difference between influence and irrelevance is not the quality of the research. It is how the research is presented. Most CDD deliverables are structured as research reports — organized by theme, summarized by finding, illustrated with quotes. Research reports are useful for the deal team. They are not useful for the IC.

IC members evaluate investment theses. They assess risk. They make go/no-go decisions. They set price parameters. They define post-close priorities. They need customer evidence mapped to these decisions, not organized by research methodology.

This guide covers how to transform raw CDD data into an IC presentation that drives decisions. For the underlying CDD methodology, see the complete guide to customer research for private equity. For the questions that generate IC-grade evidence, see 50 customer due diligence questions for PE.

Why Do Most CDD Presentations Fail at the IC Level?


Three structural problems prevent most CDD findings from influencing investment decisions.

Problem 1: Research structure instead of decision structure

A typical CDD report organizes findings by research theme: Customer Satisfaction, Competitive Landscape, Pricing Perception, Retention and Churn, Growth Potential. Each section summarizes what customers said about that topic.

This structure makes sense for the researcher. It is meaningless for the IC.

IC members do not evaluate “customer satisfaction” in the abstract. They evaluate whether the thesis assumption that “high customer satisfaction supports premium pricing and 95% retention” is supported by evidence. The satisfaction data matters only in the context of the thesis it is supposed to validate.

Problem 2: Averages instead of distributions

CDD reports typically present averages: “Average satisfaction score of 7.2 out of 10.” “NPS of 42.” “78% of customers plan to renew.”

Averages obscure the distributions that matter. An average satisfaction of 7.2 could mean everyone is at 7 (stable, predictable) or that half the base is at 9 and half is at 5 (bimodal, with a fragile segment). An NPS of 42 could reflect universal moderate promoters or a polarized base of passionate advocates and vocal detractors.

IC members need distributions, not averages. Segment-level analysis reveals where risk concentrates and where value creation opportunities exist.
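The averages problem can be made concrete with a toy calculation (illustrative scores, not from any study): two customer bases share the same mean satisfaction but carry very different risk.

```python
from statistics import mean, stdev

# Two hypothetical customer bases with the same average satisfaction.
stable = [7] * 10              # everyone scores 7: predictable
bimodal = [9] * 5 + [5] * 5    # half advocates, half at-risk: fragile

assert mean(stable) == mean(bimodal) == 7

# The spread, not the mean, reveals the fragile segment.
print(f"stable:  mean={mean(stable)}, stdev={stdev(stable):.2f}")
print(f"bimodal: mean={mean(bimodal)}, stdev={stdev(bimodal):.2f}")

# Share of the base scoring 5 or below: the churn-exposed tail.
at_risk = sum(1 for s in bimodal if s <= 5) / len(bimodal)
print(f"at-risk share of bimodal base: {at_risk:.0%}")  # 50%
```

An average of 7 hides a base where half the customers sit in the churn-exposed tail; the distribution is what the IC needs to see.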

Problem 3: Evidence without implication

Customer evidence without explicit deal implications requires the IC to do the analytical work of connecting research findings to deal decisions. Most IC members will not do that work in real time during a committee meeting.

Every finding must be paired with its implication: “35% of mid-market customers are evaluating alternatives. Implication: mid-market churn may be 2-3x the rate assumed in the model. Revenue impact: $2.1M annual risk in a segment representing 28% of ARR.”

What Is the IC-Ready CDD Framework?


An effective IC CDD presentation has four components, each designed for a different decision-making function.

Component 1: The Thesis Validation Matrix

The thesis validation matrix is the centerpiece of the IC presentation. It maps every investment thesis assumption to customer evidence.

Structure:

Assumption: Retention is product-driven
Evidence strength: Medium
Supporting signal: 62% cite product quality as primary retention driver
Disconfirming signal: 38% cite contractual lock-in or switching cost inertia
Confidence: Medium
Model impact: If lock-in-driven retention converts to product-driven churn at renewal, model churn at 12-15% vs. current 8%

Assumption: Pricing power supports 10% annual increases
Evidence strength: Low
Supporting signal: Enterprise (>$100K ARR) shows low price sensitivity
Disconfirming signal: Mid-market ($20K-$50K ARR) shows high sensitivity; 28% cite price as primary concern
Confidence: Low for mid-market, High for enterprise
Model impact: Segment pricing strategy required; blanket 10% increase would accelerate mid-market churn

Assumption: Competitive moat is defensible
Evidence strength: High
Supporting signal: 71% cannot name a viable alternative
Disconfirming signal: Among customers acquired in last 12 months, 34% evaluated 2+ alternatives
Confidence: High for installed base, Medium for new cohorts
Model impact: Moat is real but narrowing; competitive dynamics are shifting in recent cohorts

How to build it:

  1. Start with the thesis. Before designing the CDD study, extract every testable assumption from the investment thesis and deal model. Common assumptions: retention rate, pricing power, competitive defensibility, expansion revenue potential, customer concentration risk.

  2. Map interview data to assumptions. For each assumption, identify the specific interview questions and data points that test it. Not all customer evidence is relevant to every assumption.

  3. Quantify both directions. For each assumption, present both supporting and disconfirming evidence with sample sizes. “62% of 143 customers cite product quality as their primary retention driver” is an IC-grade data point. “Customers generally seem satisfied” is not.

  4. Assign confidence levels. High confidence: strong signal from representative sample with consistent pattern. Medium confidence: directionally clear but with meaningful variance or small subsample. Low confidence: mixed signals or insufficient data to draw conclusions.

  5. State model implications explicitly. Every finding should connect to a specific financial model input. If the thesis assumes 8% churn and customer evidence suggests 12-15%, state that directly.
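The five steps above can be sketched as a simple data structure. The field names and confidence thresholds here are illustrative, not a prescribed schema; the sample figures come from the retention row of the matrix.

```python
from dataclasses import dataclass

@dataclass
class ThesisAssumption:
    assumption: str
    supporting_pct: float      # share of sample supporting the assumption
    disconfirming_pct: float   # share contradicting it
    n: int                     # sample size behind the finding
    model_impact: str          # the model input this finding adjusts

    def confidence(self) -> str:
        # Illustrative rule of thumb: strong signal + representative sample.
        if self.n >= 100 and self.supporting_pct >= 0.70:
            return "High"
        if self.n >= 50 and self.supporting_pct >= 0.55:
            return "Medium"
        return "Low"

retention = ThesisAssumption(
    assumption="Retention is product-driven",
    supporting_pct=0.62, disconfirming_pct=0.38, n=143,
    model_impact="If lock-in converts to churn, model 12-15% vs. 8%",
)
print(retention.confidence())  # Medium
```

Encoding the matrix this way forces every assumption to carry a sample size, both evidence directions, and an explicit model impact before it reaches the IC.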

Component 2: The Risk Register

The risk register catalogs every material risk surfaced through customer evidence, ranked by severity and mitigability.

Structure for each risk:

  • Risk description. One sentence stating the risk clearly.
  • Evidence base. The customer data supporting the risk identification, with sample size and confidence level.
  • Severity. Impact on deal economics if the risk materializes. Quantified in revenue, margin, or multiple impact terms.
  • Mitigability. Can this risk be addressed post-close through operating changes? Some risks are fixable (product gaps, pricing structure, support quality). Others are structural (market shift, competitive displacement, regulatory change).
  • Timeline. When would this risk manifest? Near-term (0-12 months), medium-term (1-3 years), or long-term (3+ years)?
  • Representative verbatim. Two to three customer quotes that illustrate the risk in customer language.

Example entry:

Risk: Mid-market pricing sensitivity threatens retention at current growth plan price points

Evidence: 28% of mid-market customers ($20K-$50K ARR) cite price as their primary concern. 14% have actively priced competitive alternatives in the last 6 months. Among mid-market customers acquired in the last 12 months, pricing concern rises to 41%.

Severity: Mid-market represents $18M ARR (28% of total). A 5% incremental churn from pricing sensitivity = $900K annual revenue at risk. At planned price increase, modeled risk rises to $1.8-$2.7M.

Mitigability: HIGH. Segment-specific pricing strategy (hold mid-market prices, increase enterprise) could address the risk. Product feature differentiation that justifies the premium is an alternative approach.

Timeline: Near-term. The planned Year 1 price increase would trigger the risk within 6 months of implementation.

Verbatim: “We looked at [competitor] last quarter when our renewal came through with a 12% increase. Their pricing is 30% below and the product covers 80% of what we need.” — Mid-market customer, 18-month tenure
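The severity arithmetic in this entry can be reproduced directly. Figures are taken from the example above; the 2-3x multipliers are the modeled range at the planned price increase.

```python
segment_arr = 18_000_000   # mid-market ARR from the example entry
incremental_churn = 0.05   # 5% incremental churn from pricing sensitivity

base_risk = segment_arr * incremental_churn
print(f"base annual revenue at risk: ${base_risk:,.0f}")  # $900,000

# At the planned price increase, the example models 2-3x the base risk.
low, high = base_risk * 2, base_risk * 3
print(f"modeled risk at planned increase: ${low/1e6:.1f}M-${high/1e6:.1f}M")
```

Showing the arithmetic rather than just the endpoint lets committee members stress-test the churn assumption themselves.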

Component 3: The Segment Analysis

IC members think in segments because different customer segments carry different economic profiles, risks, and value creation opportunities.

Present CDD findings segmented by the dimensions that matter for the deal model:

By ARR tier:

  • Enterprise (>$100K): Retention intent, pricing sensitivity, competitive consideration
  • Mid-market ($20K-$100K): Same dimensions, often very different findings
  • SMB (<$20K): Often the riskiest segment, with highest churn and lowest switching costs

By tenure:

  • Long-tenured (3+ years): Loyalty drivers, product dependency, expansion usage
  • Mid-tenure (1-3 years): Satisfaction trajectory, competitive awareness
  • Recent (< 1 year): Acquisition quality, early satisfaction, onboarding experience

By engagement level:

  • Power users: Product advocacy, expansion potential, feature requests
  • Standard users: Satisfaction baseline, competitive consideration
  • Low-engagement: Churn risk, value perception, switching triggers

The segment analysis reveals where value concentrates and where risk hides. An overall NPS of 45 might comprise enterprise NPS of 65 and SMB NPS of 15 — a distribution with very different implications than a uniform 45.
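The blended-NPS decomposition works out if, for example, 60% of respondents are enterprise. The respondent weights here are hypothetical; the article only gives the segment scores.

```python
def blended_nps(segments):
    """Respondent-weighted blend of segment NPS scores.

    segments: list of (nps, respondent_count) pairs.
    """
    total = sum(n for _, n in segments)
    return sum(nps * n for nps, n in segments) / total

# Enterprise NPS 65 (60 respondents), SMB NPS 15 (40 respondents).
print(blended_nps([(65, 60), (15, 40)]))  # 45.0
```

The same headline number can hide many segment mixes, which is why the IC deck should report the segment scores alongside the blend.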

Component 4: The Customer Evidence Appendix

The appendix provides full traceability for IC members who want to verify conclusions or explore specific findings.

Contents:

  1. Methodology overview. Sample design, recruitment methodology (independent recruitment from a 4M+ panel), interview methodology (AI-moderated with 5-7 level laddering), analysis framework. One page.

  2. Full statistical tables. Every quantified finding with sample sizes, confidence intervals, and segment breakdowns. Five to ten pages.

  3. Risk register detail. Extended version of the risk register with complete verbatim evidence for each risk.

  4. Intelligence Hub access. For firms using a CDD program with an Intelligence Hub, provide committee members with access to search individual transcripts and explore findings interactively.

Presentation Techniques That Drive Decisions


Lead with the strongest challenge

IC presentations that lead with positive findings lose credibility. Committee members know the deal team wants the deal to happen. If the first ten minutes of the CDD presentation are positive, committee members discount everything that follows.

Lead with the most significant risk or thesis challenge. “Our thesis assumes 95% net retention. Customer evidence supports 85-90%. Here is why, and here is what we would need to do post-close to close the gap.” This establishes credibility and frames the rest of the presentation as evidence-based rather than advocacy.

Use the verbatim-statistic pair

The most effective evidence presentation pairs a quantitative finding with a representative verbatim quote.

Weak: “35% of customers are evaluating alternatives.”

Strong: “35% of customers are evaluating alternatives. As one enterprise customer put it: ‘We completed a competitive evaluation in Q3. [Competitor] covers 80% of what we need at 40% of the cost. The only reason we have not switched is the migration effort. If they release their API integration — which they have told us is in Q2 — we will move.’”

The statistic provides scale. The verbatim provides mechanism. Together, they create actionable intelligence.

Quantify uncertainty explicitly

Do not present findings as certainties. Present them as probabilities with evidence strength ratings.

Weak: “Customers will accept a 10% price increase.”

Strong: “Enterprise customers (n=47) show low pricing sensitivity — 83% indicate willingness to absorb moderate increases. Mid-market customers (n=38) show high sensitivity — 41% cite price as a primary concern and 14% have priced alternatives in the last 6 months. Confidence: HIGH for enterprise tolerance, LOW for mid-market tolerance. Recommendation: segment-specific pricing strategy rather than blanket increase.”
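The confidence ratings above map naturally onto interval width. A minimal sketch using the normal approximation for a proportion (sample figures from the mid-market finding; the approximation is a common shorthand, not the only valid interval):

```python
from math import sqrt

def proportion_ci(p, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    margin = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# 41% of 38 mid-market customers cite price as a primary concern.
lo, hi = proportion_ci(0.41, 38)
print(f"mid-market price concern: 41% (95% CI {lo:.0%}-{hi:.0%})")

# An interval roughly 31 points wide on n=38 is why the mid-market
# finding earns a LOW confidence rating in the presentation.
```

Quoting the interval alongside the point estimate makes the "LOW for mid-market" rating self-evident rather than a judgment call the IC has to take on faith.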

Map to financial model inputs

Every finding should connect to a specific line in the deal model. IC members think in IRR, MOIC, and revenue multiples. Customer evidence becomes decisive when it changes these numbers.

Framework:

  • Customer retention evidence → churn rate assumption → revenue projection → exit multiple → IRR impact
  • Pricing sensitivity evidence → price increase assumption → margin projection → EBITDA → entry multiple negotiation
  • Competitive dynamics evidence → market share assumption → growth rate → revenue trajectory → value creation plan
  • Customer concentration evidence → revenue concentration risk → discount rate → risk-adjusted return

When the CDD presentation shows that customer evidence reduces the modeled retention rate from 95% to 90%, and that 5% delta reduces IRR by 300 basis points, the customer evidence becomes the conversation.
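A toy single-entry, single-exit model makes the retention-to-IRR chain explicit. All figures are illustrative, and entry and exit multiples are held constant, so IRR collapses to net revenue growth; real deal models with interim cash flows and multiple expansion will behave differently.

```python
def hold_irr(entry_ev, revenue, growth, churn, multiple, years=5):
    """IRR of a buy-and-exit with one entry and one exit cash flow."""
    net_growth = growth - churn
    exit_revenue = revenue * (1 + net_growth) ** years
    exit_ev = exit_revenue * multiple
    return (exit_ev / entry_ev) ** (1 / years) - 1

# Hypothetical deal: $100M revenue bought and sold at 10x.
base = hold_irr(1000, 100, growth=0.15, churn=0.05, multiple=10)
bear = hold_irr(1000, 100, growth=0.15, churn=0.10, multiple=10)

print(f"IRR at 5% churn:  {base:.1%}")   # 10.0%
print(f"IRR at 10% churn: {bear:.1%}")   # 5.0%
print(f"IRR impact of churn evidence: {(base - bear) * 10000:.0f} bps")
```

In this simplified setup a 5-point churn delta flows straight through to a 500-basis-point IRR delta, which is exactly the kind of traceable chain the framework above describes.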

Common IC Questions and How to Answer Them


“How representative is this sample?”

Answer: “We interviewed [N] customers, independently recruited from a 4M+ panel without the target’s involvement. The sample covers [X]% of the customer base by revenue and is stratified across [segments]. Recruitment was random within strata — no management involvement in participant selection. The sample is more representative than any reference call program and statistically significant at the 95% confidence level for the key findings.”

“Is the AI interview methodology reliable?”

Answer: “AI-moderated interviews achieve 98% participant satisfaction and use 5-7 levels of systematic laddering — the same probing methodology used by McKinsey and Bain in human-moderated research. The consistency advantage is that every interview follows the same methodology, eliminating the interviewer variability that affects human-moderated studies. Customers speak more candidly about vendor weaknesses with AI moderation because there is no social pressure to be polite.”

“How do we know customers are being honest?”

Answer: “Three structural features promote candor: independent recruitment (customers do not know who commissioned the research), blind moderation (the AI does not reveal the sponsor), and multi-layer fraud prevention (bot detection, consistency checks, duplicate suppression). Additionally, independently-recruited customer satisfaction scores run 30-40% lower than management-provided reference calls for the same company — evidence that the methodology surfaces critical perspectives that curated references do not.”

“This contradicts what management told us.”

Answer: “That is precisely the value. Management has structural incentives to present favorable customer narratives. Independent customer evidence provides an unbiased baseline. The gap between management narrative and customer reality is itself a finding — it indicates either that management lacks visibility into customer sentiment or that they are intentionally presenting a rosier picture. Both are relevant to the investment decision.”

The Exit Multiplier: CDD Evidence in Seller Diligence


CDD findings do not only influence buy-side decisions. For portfolio companies approaching exit, accumulated CDD evidence becomes a powerful asset in the sale process.

When a buyer’s diligence team requests customer evidence, a portfolio company that has been running a systematic CDD program can provide:

  • Four years of independently-measured NPS trends showing consistent improvement
  • Quarterly retention data validated by customer interviews, not just financial metrics
  • Competitive positioning evidence showing strengthening moat over time
  • Customer expansion intent data supporting the growth narrative

This evidence package shortens buyer diligence, increases bid confidence, and directly supports exit multiple negotiations. A buyer presented with rigorous, independent, longitudinal customer evidence will price the asset differently than a buyer who receives management-provided reference calls.

Getting Started


The path from presenting CDD as a supplementary appendix to presenting it as the analytical backbone of the IC memo starts with the thesis validation matrix. Before commissioning the next CDD study, extract every testable assumption from the investment thesis. Design the study to test those assumptions. Structure the deliverable around the matrix, not around research themes.

The customer evidence platform you use matters. User Intuition delivers 50-200 independently-recruited customer interviews in 48-72 hours, with 5-7 level laddering methodology designed to produce IC-grade evidence. The Intelligence Hub provides full traceability from IC memo conclusions to individual customer transcripts.

Start a free study to see how AI-moderated CDD evidence compares to your current reference call process, or book a demo to discuss how customer evidence can strengthen your next IC presentation.

Frequently Asked Questions

How should CDD findings be organized for the IC?

CDD findings should be organized around thesis assumptions, not research themes. Each thesis assumption (retention is durable, pricing power exists, competitive moat is defensible) gets an evidence section showing: the assumption stated, the customer evidence for and against, a confidence score based on interview data, and the implication for the deal model. IC members evaluate assumptions, not research themes.

What is a thesis validation matrix?

A thesis validation matrix maps each investment thesis assumption to specific customer evidence. For each assumption, the matrix shows: the assumption, supporting evidence (with sample size and verbatim), disconfirming evidence, confidence level (high/medium/low based on signal strength and sample representation), and the recommended model adjustment. This is the core analytical framework that converts customer interviews into IC-actionable intelligence.

How many interviews does an IC presentation need?

Fifty independently-recruited interviews is the minimum threshold for IC credibility. Below 50, IC members will question statistical relevance. At 100-200, segmented analysis by customer type, tenure, and size becomes possible. The independence of recruitment matters as much as the count: 50 independently-recruited interviews carry more IC weight than 200 management-supplied reference calls.

How should disconfirming evidence be presented?

Disconfirming evidence should be presented prominently, not buried. The most credible CDD presentations lead with the strongest challenge to the thesis and then present the full evidence picture. IC members trust presentations that acknowledge risks over those that only present positive signals. For each risk, include: the evidence, the severity, the mitigability (can the risk be addressed post-close?), and the model impact.

How should verbatim quotes be used?

Verbatim quotes should be used sparingly and strategically, selected to illustrate patterns rather than to cherry-pick positive sentiment. The most effective use is pairing a quantitative finding with 2-3 representative verbatims that make the number real. When 35% of customers express switching intent, three verbatims describing why they are considering alternatives make the statistic vivid and actionable.

What is the customer evidence appendix?

The customer evidence appendix is a structured supplement to the IC memo that provides full traceability from conclusions to raw data. It includes: methodology description (sample design, recruitment independence, interview methodology), full statistical tables, segment-level analysis, risk register with supporting verbatim, and links to the Intelligence Hub for committee members who want to review individual transcripts.

How do CDD findings affect deal pricing?

CDD findings create evidence-backed adjustments to financial model assumptions. If customer interviews reveal higher switching intent than the model assumes, the churn rate input is adjusted upward. If pricing sensitivity is concentrated in a specific segment, the revenue growth assumption for that segment is discounted. Each adjustment is traceable to specific customer evidence, making the price conversation data-driven rather than negotiation-driven.

Can CDD findings kill a deal?

Yes, and that is their highest value. When 40% of independently-recruited customers report active competitive evaluation, when NPS is 34 versus the management-reported 72, and when the primary retention driver is contractual lock-in rather than product satisfaction, these findings should kill deals or fundamentally reshape pricing. The cost of one avoided bad deal pays for a decade of CDD programs.
Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
