
Commercial Due Diligence Template: Interview Framework for PE

By Kevin, Founder & CEO

A commercial due diligence template is a structured framework that converts investment thesis assumptions into testable customer hypotheses, defines the interview methodology for validating those hypotheses at scale, and provides the scoring and synthesis architecture for translating raw customer conversations into investment committee-ready findings. It is the operational blueprint that sits between the deal team’s thesis and the customer evidence that confirms or challenges it.

Most deal teams approach commercial due diligence as a market analysis exercise: TAM sizing, competitive landscape mapping, industry trend assessment. These are necessary but insufficient. They tell you what the market looks like from the outside. They do not tell you what the target company's customers actually think, intend to do, or are willing to pay. (For a comprehensive overview of the entire CDD discipline, see the complete guide to commercial due diligence.) The commercial due diligence template in this guide is designed specifically for the customer evidence layer: the primary research that validates whether the revenue the financial model projects will actually materialize.

For the complete PE customer research framework — from pre-LOI thesis validation through portfolio monitoring and exit preparation — see the complete guide to customer research for private equity.

Why Most CDD Templates Fail


The standard commercial due diligence template circulating through deal teams and consulting firms was designed for a different era and a different purpose. It is built around secondary research: market sizing, competitive positioning maps, industry growth rates, regulatory landscape assessment. These are desk research exercises. They aggregate publicly available data into a narrative about market attractiveness.

The problem is not that secondary research is wrong. It is that secondary research cannot answer the questions that actually determine deal outcomes.

No amount of TAM analysis tells you whether the target’s customers plan to renew next year. No competitive landscape map reveals that 40% of the customer base is actively evaluating a competitor’s product. No industry report surfaces the fact that the target’s largest customer segment considers the product a commodity and would switch for a 15% price reduction.

These are primary research questions. They require talking to customers — independently, at scale, with structured methodology.

The customer voice is the missing layer in most commercial due diligence. And the reason it is missing is operational, not strategic. Every deal team knows that customer evidence matters. The constraint has always been time and cost. Traditional customer research takes 4-8 weeks and costs $75,000-$200,000 through a consulting firm. Deal timelines do not accommodate that. So deal teams default to 3-5 management-curated reference calls, check the box, and move on.

This template is designed for a different operating model: 50-200 independent customer interviews completed in 72 hours. AI-moderated interviews at $20 each make this scale practical within deal timelines and budgets. For a deeper look at how AI is transforming the speed and depth of commercial due diligence, see AI-powered commercial due diligence. The template below provides the structure to ensure those interviews produce actionable, IC-ready evidence.

What Is the Customer Interview Framework?


The framework has six components, each building on the previous. Together, they form the complete workflow from investment thesis to IC memo.

1. Thesis Definition Worksheet

Every commercial due diligence study begins with the investment thesis. The thesis definition worksheet translates abstract investment assumptions into specific, testable customer hypotheses.

Process:

Start with each core thesis assumption. For every assumption, define the customer behavior or perception that would need to be true for the assumption to hold. Then define the counter-evidence that would challenge it.

Example mapping:

| Thesis Assumption | Customer Hypothesis | Validation Evidence | Challenge Evidence |
|---|---|---|---|
| 90%+ net revenue retention | Customers plan to renew and expand | 85%+ of customers report renewal intent; 40%+ report expansion plans | Significant cohort evaluating alternatives; price sensitivity in renewal discussions |
| Strong competitive moat | Customers perceive high switching costs | Customers cite integration depth, workflow dependency, data migration concerns | Customers view product as interchangeable; competitors offering migration support |
| Pricing power supports margin expansion | Customers would absorb 10-15% price increase | Majority report value exceeds cost; willingness to pay higher if features improve | Price is top-3 decision factor; customers benchmark against cheaper alternatives |
| Large upsell opportunity in existing base | Customers have unmet needs the product could address | Customers identify adjacent problems; willingness to consolidate vendors | Customers prefer best-of-breed; distrust vendor expansion into new areas |
| Low customer concentration risk | Revenue distributed across diverse segments | No single segment accounts for >20% of sentiment variance | Small number of vocal champions drive disproportionate advocacy |

Deliverable: A completed worksheet with 5-8 thesis assumptions, each mapped to specific customer hypotheses with defined validation and challenge criteria. This worksheet becomes the backbone of the interview guide — every question in the interview traces back to a thesis assumption.
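As an illustration only (not part of the template itself), the worksheet rows can be kept as structured data so that every interview question and every coded response traces back to a specific assumption. A minimal Python sketch, with field names that are my own, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ThesisHypothesis:
    """One row of the thesis definition worksheet (illustrative field names)."""
    assumption: str            # core investment thesis assumption
    hypothesis: str            # customer behavior that must be true for it to hold
    validation_evidence: str   # what would confirm the hypothesis
    challenge_evidence: str    # what would challenge it

# One example row, taken from the mapping table above.
worksheet = [
    ThesisHypothesis(
        assumption="90%+ net revenue retention",
        hypothesis="Customers plan to renew and expand",
        validation_evidence="85%+ report renewal intent; 40%+ report expansion plans",
        challenge_evidence="Significant cohort evaluating alternatives",
    ),
]

# Integrity check: no assumption is left without a testable hypothesis
# and defined evidence criteria on both sides.
assert all(r.hypothesis and r.validation_evidence and r.challenge_evidence
           for r in worksheet)
```

Keeping the worksheet machine-readable pays off later: the thematic coding and scoring steps can key every tagged response to a worksheet row rather than to free text.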

2. Sample Plan Template

The sample plan determines who you interview. A flawed sample plan produces biased evidence regardless of how good the interview guide is.

Segmentation dimensions:

Customer size:

  • Enterprise (top 20% by revenue contribution)
  • Mid-market (middle 60%)
  • SMB / long-tail (bottom 20%)

Tenure:

  • New customers (less than 12 months)
  • Established customers (1-3 years)
  • Long-term customers (3+ years)

Satisfaction proxy (if available from NPS, CSAT, or support data):

  • Promoters (high satisfaction / advocacy)
  • Passives (neutral)
  • Detractors (low satisfaction / at-risk)

Geography (for multi-market targets):

  • Core markets
  • Growth markets
  • New / emerging markets

Minimum sample sizes:

| Segment | Minimum Interviews | Purpose |
|---|---|---|
| Enterprise | 10-15 | Revenue concentration risk, strategic relationship depth |
| Mid-market | 20-30 | Core business health, competitive dynamics |
| SMB / long-tail | 10-15 | Scalability of value proposition, churn patterns |
| New customers (<12 months) | 10-15 | Onboarding quality, early satisfaction, initial competitive perception |
| Long-term customers (3+ years) | 10-15 | Loyalty durability, complacency risk, evolution of needs |
| Known detractors (if identifiable) | 5-10 | Root cause of dissatisfaction, switching triggers, competitive alternatives under consideration |

Total minimum: 50 interviews. For high-value deals or complex customer bases, target 100-150.

Critical principle: Independent recruitment. Customers must be recruited from a third-party panel or through independent outreach — never through the target company. If the target company selects or influences which customers participate, you have reference calls, not research. The target company should not know which customers were interviewed. This is what separates commercial due diligence from reference checking. For the full methodology behind independent recruitment and structured CDD programs, see the commercial due diligence solution page.

3. Interview Guide Template

The interview guide structures every conversation to produce comparable, analyzable data while leaving room for the unexpected insights that make qualitative research valuable. The guide follows a four-phase structure.

Phase 1: Warm-Up (2-3 minutes)

Purpose: Establish context, build rapport, capture baseline relationship data.

  • How long have you been using [product/company]?
  • What was the original reason you chose them?
  • How would you describe your current usage — is it growing, stable, or declining?
  • Who in your organization are the primary users?

These questions are not throwaways. They establish the respondent’s vantage point and provide segmentation data that enriches the analysis. A customer who has been with the target for five years and is increasing usage provides very different signal than a customer of six months with flat adoption.

Phase 2: Core Thesis Probes (10-15 minutes)

Purpose: Directly test investment thesis assumptions.

Each probe maps to a row in the thesis definition worksheet. Structure the questions from open-ended to specific — never lead with the hypothesis you are testing.

Retention and loyalty probes:

  • When your contract comes up for renewal, what goes through your decision process?
  • Have you evaluated any alternatives in the past 12 months? What prompted that evaluation?
  • If you were starting from scratch today, would you choose [product] again? Why or why not?

Competitive positioning probes:

  • When you think about alternatives to [product], what comes to mind?
  • What does [product] do better than anything else you have used? Where does it fall short?
  • If a competitor offered to migrate you at no cost, what would make you consider it? What would make you stay?

Pricing and value probes:

  • How do you think about the value you get relative to what you pay?
  • If the price increased by 15% at your next renewal, how would that affect your decision?
  • Are there features or capabilities you would pay more for? What are they?

Growth and expansion probes:

  • Are there problems adjacent to what [product] solves that you wish they would address?
  • Have you expanded your usage in the past year? Do you plan to?
  • If [product] launched [hypothesized new capability], would that change how much you spend with them?

Phase 3: Depth Laddering (10-15 minutes)

Purpose: Move beyond surface responses to uncover root motivations, latent risks, and non-obvious patterns.

This is where structured methodology separates due diligence interviews from casual conversations. The 5-7 level laddering technique involves following each significant response with progressive “why” and “what” probes until the root driver is exposed.

Example ladder:

  • Level 1: “We are looking at switching to [competitor].” — Why?
  • Level 2: “Their reporting is better.” — What specifically about their reporting?
  • Level 3: “We can build custom dashboards without IT involvement.” — Why does that matter?
  • Level 4: “Our IT team has a 3-week backlog for any dashboard request.” — What happens during those 3 weeks?
  • Level 5: “We make decisions based on outdated data, or we do not make them at all.” — What is the business impact?
  • Level 6: “We missed a regional demand shift last quarter that cost us approximately $2M in inventory write-downs.” — And that is why self-serve reporting drives your evaluation criteria?
  • Level 7: “Yes — this is not about reporting preference. It is about operational speed. Whoever gives us real-time visibility wins our budget.”

Level 1 sounds like a feature gap. Level 7 reveals the actual decision driver: operational speed and financial impact of information latency. The thesis implication shifts from “invest in better reporting” to “the competitive moat depends on reducing customer time-to-insight.” User Intuition’s AI moderator applies this 5-7 level laddering automatically and consistently across every interview, which is what makes it possible to achieve this depth across 50-200 conversations in 72 hours. For more on constructing effective laddering sequences, see the customer due diligence question guide.

Phase 4: Close (3-5 minutes)

Purpose: Capture quantitative anchors and forward-looking intent.

  • On a scale of 0-10, how likely are you to recommend [product] to a colleague? (NPS)
  • On a scale of 1-5, how likely are you to still be a customer in 2 years?
  • If you had to predict, would your spending with [product] increase, stay the same, or decrease over the next 12 months?
  • Is there anything about your experience that we have not covered that you think is important?

The final open-ended question consistently surfaces unexpected insights. Do not skip it.

4. Scoring Rubric

Qualitative data becomes actionable for deal teams when it is quantified consistently. The scoring rubric translates interview evidence into comparable metrics across the entire sample.

Four scoring dimensions:

Retention Risk (1-5)

  • 1 = Strong advocate, no alternatives considered, multi-year commitment intent
  • 2 = Satisfied, no active evaluation, but open to alternatives if approached
  • 3 = Neutral, periodic competitive evaluation, decision driven by pricing at renewal
  • 4 = Dissatisfied on specific dimensions, actively aware of alternatives, would switch if migration were easier
  • 5 = Actively evaluating alternatives, switching timeline defined, specific competitor identified

Competitive Vulnerability (1-5)

  • 1 = No credible alternative identified, deep workflow integration, high perceived switching cost
  • 2 = Aware of alternatives but sees meaningful differentiation in target’s favor
  • 3 = Considers target and 1-2 alternatives roughly equivalent
  • 4 = Perceives competitor advantage on key dimensions, switching cost is the primary retention factor
  • 5 = Competitor perceived as superior, migration plan in progress or recently completed evaluation

Growth Potential (1-5)

  • 1 = Contracting usage, reducing seats/licenses, eliminating use cases
  • 2 = Stable usage, no expansion plans, budget flat
  • 3 = Stable with modest expansion potential if triggered by new features or pricing
  • 4 = Actively expanding usage, adding teams/use cases, budget increasing
  • 5 = Strategic platform, enterprise-wide expansion planned, multi-year growth trajectory

Champion Dependency (1-5)

  • 1 = Broad organizational adoption, value recognized across functions, no single advocate required
  • 2 = Multiple champions across departments, loss of one would not threaten the account
  • 3 = 2-3 key champions, organizational awareness beyond them but shallow
  • 4 = Single strong champion, limited awareness outside their team, risk if champion leaves
  • 5 = Entirely dependent on one individual, organization unaware of product value, champion departure would likely trigger churn

Aggregate Deal Risk Score:

Calculate the weighted average across all interviews, with weights adjusted by customer segment importance (typically weighted by revenue contribution).

| Aggregate Score | Risk Level | Implication |
|---|---|---|
| 1.0 - 1.9 | Low | Thesis-consistent; customer base is a genuine asset |
| 2.0 - 2.4 | Moderate-Low | Generally healthy; monitor specific segment risks |
| 2.5 - 3.0 | Moderate | Mixed signals; thesis holds for some segments, challenged in others |
| 3.1 - 3.5 | Moderate-High | Material risks identified; thesis requires adjustment |
| 3.6 - 5.0 | High | Thesis-challenging evidence; deal structure or pricing should reflect customer risk |
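The weighted aggregation is simple arithmetic, but it is worth being explicit about. A Python sketch, assuming revenue contribution as the weight; the interview scores and dollar figures below are hypothetical:

```python
def aggregate_risk_score(interviews):
    """Revenue-weighted average of per-interview risk scores (1-5 scale).

    interviews: list of (risk_score, revenue_weight) tuples, where risk_score
    is the average of the four rubric dimensions for one interview.
    """
    total_weight = sum(w for _, w in interviews)
    return sum(s * w for s, w in interviews) / total_weight

def risk_level(score):
    """Map an aggregate score onto the risk bands defined above."""
    if score < 2.0:
        return "Low"
    if score < 2.5:
        return "Moderate-Low"
    if score <= 3.0:
        return "Moderate"
    if score <= 3.5:
        return "Moderate-High"
    return "High"

# Three interviews scored 2.0, 3.5, and 4.0, weighted by annual revenue:
# (2.0*500k + 3.5*300k + 4.0*200k) / 1,000k = 2.85
score = aggregate_risk_score([(2.0, 500_000), (3.5, 300_000), (4.0, 200_000)])
print(round(score, 2), risk_level(score))  # 2.85 Moderate
```

Note how the weighting matters: an unweighted average of the same three interviews would be 3.17 (Moderate-High), so the revenue-weighted view materially changes the headline risk level.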

5. Synthesis Framework

Fifty or more interviews produce a volume of qualitative data that overwhelms ad hoc analysis. The synthesis framework provides a structured method for aggregating individual conversations into segment-level and portfolio-level findings.

Step 1: Thematic coding

Review every interview and tag responses against the thesis hypotheses from the definition worksheet. Each response either validates, challenges, or is neutral to a specific hypothesis. Track the count and strength of evidence on each side.
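The tallying step can be sketched in a few lines of Python; the hypothesis names and tags below are illustrative, not a prescribed vocabulary:

```python
from collections import Counter

def code_responses(tagged):
    """Tally validate / challenge / neutral evidence per thesis hypothesis.

    tagged: iterable of (hypothesis_id, tag) pairs produced while reviewing
    interviews, where tag is 'validate', 'challenge', or 'neutral'.
    """
    counts = {}
    for hypothesis, tag in tagged:
        counts.setdefault(hypothesis, Counter())[tag] += 1
    return counts

tags = [
    ("retention", "validate"), ("retention", "validate"),
    ("retention", "challenge"), ("pricing", "neutral"),
]
counts = code_responses(tags)
print(counts["retention"]["validate"], counts["retention"]["challenge"])  # 2 1
```

The resulting counts per hypothesis feed directly into the segment-level pattern analysis and the confidence assessment in Step 5.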

Step 2: Segment-level pattern analysis

Aggregate coded themes by segment (customer size, tenure, geography, satisfaction). The most valuable findings are typically segment-level divergences:

  • Enterprise customers are loyal, but mid-market customers are actively evaluating competitors
  • New customers (under 12 months) report strong onboarding, but customers at the 18-24 month mark report declining engagement
  • North American customers see the product as best-in-class, but European customers perceive a local competitor as superior

These divergences are invisible in aggregate data. They only emerge when the sample is large enough to support segment-level analysis — which is why 50+ interviews is the minimum, not a stretch goal.

Step 3: Outlier identification

Flag responses that fall outside the dominant pattern. Outliers are not noise — they are often early signals. A single enterprise customer describing a competitive evaluation that no other customer mentions may represent the leading edge of a trend that has not yet reached the broader base. Document outliers separately with full context.

Step 4: Contradiction analysis

Compare customer evidence against management claims. Where do they align? Where do they diverge? Divergence is not automatically negative — management may have a forward-looking perspective that customers have not yet experienced — but every material divergence requires explanation.

Common contradiction patterns:

  • Management claims best-in-class customer support; customers report 48-hour average response times
  • Management projects 120% net retention; customer interviews reveal a cohort of mid-market accounts planning to downgrade
  • Management positions product as platform play; customers view it as a point solution and resist expanding usage

Step 5: Confidence assessment

Rate your confidence in each finding based on evidence strength:

  • High confidence: Consistent evidence across 70%+ of relevant interviews, supported by multiple segments
  • Medium confidence: Evidence from 40-70% of relevant interviews, or strong evidence in some segments but absent in others
  • Low confidence: Emerging signal from less than 40% of interviews, potentially influenced by a small number of vocal respondents
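The percentage thresholds above reduce to a simple bucketing function. A sketch in Python (it deliberately omits the secondary "supported by multiple segments" condition, which requires segment-level data):

```python
def confidence(support_count, relevant_count):
    """Bucket a finding by the share of relevant interviews supporting it."""
    share = support_count / relevant_count
    if share >= 0.70:
        return "High"
    if share >= 0.40:
        return "Medium"
    return "Low"

print(confidence(38, 52))  # 38/52 = 73% -> High
print(confidence(25, 52))  # 48% -> Medium
print(confidence(9, 52))   # 17% -> Low
```

The denominator is the number of interviews where the hypothesis was relevant, not the full sample; a finding about enterprise renewal behavior should be judged against enterprise interviews only.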

6. IC Memo Format

The IC memo is the deliverable that translates 50+ customer interviews into a format investment committee members can consume in 15-20 minutes and use to make capital allocation decisions.

Structure:

I. Executive Summary (1 page)

  • Go / no-go recommendation with confidence level (high, medium, low)
  • Three to five headline findings, each in one sentence
  • Aggregate deal risk score with brief interpretation
  • Recommended bid adjustment (if applicable): premium, as-modeled, discount, restructure

II. Methodology and Sample

  • Number of interviews completed, time frame, moderation method
  • Sample composition by segment (table format)
  • Recruitment methodology (independent vs. company-assisted — always independent)
  • Response rate and any notable recruitment challenges

III. Thesis-by-Thesis Validation

For each thesis assumption from the definition worksheet:

  • Thesis statement: What the deal team assumed
  • Customer evidence: What the interviews revealed (with quantified support — e.g., “34 of 52 customers reported…”)
  • Assessment: Validated / Partially Validated / Challenged / Insufficient Evidence
  • Key verbatims: 2-3 direct customer quotes that capture the finding (anonymized)
  • Implication for deal: What this means for valuation, deal structure, or post-acquisition priorities

IV. Risk Matrix

| Risk | Likelihood (1-5) | Severity (1-5) | Risk Score | Mitigant |
|---|---|---|---|---|
| Mid-market churn acceleration | 4 | 3 | 12 | Invest in mid-market success program post-close |
| Competitor X gaining in EMEA | 3 | 4 | 12 | Accelerate European product localization |
| Champion dependency in top 10 accounts | 3 | 5 | 15 | Expand stakeholder mapping and multi-threading |
| Pricing resistance above current levels | 2 | 4 | 8 | Stage price increases; lead with value demonstration |
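The risk score is likelihood multiplied by severity, each on a 1-5 scale, so scores range from 1 to 25. A small Python sketch that reproduces and ranks the example matrix above:

```python
def risk_score(likelihood, severity):
    """Risk score as likelihood x severity, each rated on a 1-5 scale."""
    return likelihood * severity

# (risk name, likelihood, severity) rows from the example matrix above.
risks = [
    ("Mid-market churn acceleration", 4, 3),
    ("Competitor X gaining in EMEA", 3, 4),
    ("Champion dependency in top 10 accounts", 3, 5),
    ("Pricing resistance above current levels", 2, 4),
]

# Rank risks for the IC memo, highest score first.
for name, likelihood, severity in sorted(
        risks, key=lambda r: risk_score(r[1], r[2]), reverse=True):
    print(f"{risk_score(likelihood, severity):>2}  {name}")
```

Ranking by score surfaces champion dependency (15) ahead of the two risks tied at 12, which is the ordering the IC memo should present.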

V. Customer Evidence Appendix

Organized by theme, not by individual interview. Each theme includes:

  • Summary finding
  • Supporting data (score distribution, segment breakdown)
  • Representative verbatims (3-5 per theme, anonymized)
  • Contradictions or nuances

VI. Recommended Adjustments

Specific, actionable recommendations:

  • Bid price adjustment with rationale
  • Deal structure modifications (earnouts, holdbacks tied to retention metrics)
  • First 100-day priorities for the operating team
  • Monitoring metrics for the first year post-close

CDD Deal Timeline: 3 Weeks From Thesis to IC Memo


Commercial due diligence must operate within deal timelines. A 6-week research project that delivers after the bid deadline is worthless regardless of its quality. This timeline is designed for completion within a typical exclusivity window or competitive auction process.

Week 1: Design and Recruit

| Day | Activity | Milestone |
|---|---|---|
| Day 1-2 | Complete thesis definition worksheet from CIM, management presentation, and preliminary financial model. Map 5-8 thesis assumptions to customer hypotheses. | Thesis-to-hypothesis mapping complete |
| Day 2-3 | Design sample plan: define segments (size, tenure, satisfaction proxy), set minimum interview targets per segment, finalize screener criteria. | Sample plan approved by deal team |
| Day 3-4 | Finalize interview guide: customize Phase 2 (core thesis probes) for this specific deal. Define scoring rubric weights by segment importance. | Interview guide and scoring rubric finalized |
| Day 4-5 | Launch recruitment: independent panel recruitment against screener criteria. No contact with target company for participant selection. | Recruitment underway; first participants confirmed |

Week 2: Field Interviews

| Day | Activity | Milestone |
|---|---|---|
| Day 6-8 | Conduct 50-200 AI-moderated interviews. With User Intuition, interviews run asynchronously; participants complete at their convenience, producing more thoughtful responses than scheduled calls. | 75%+ of target interviews completed |
| Day 8-9 | Begin preliminary coding and scoring as interviews complete. Identify emerging patterns and potential red flags for early escalation to the deal team. | Preliminary pattern analysis available; red flags escalated |
| Day 9-10 | Complete remaining interviews. Run full thematic coding against thesis hypotheses. Calculate scoring rubric scores by segment. | All interviews completed; full coding and scoring complete |

Week 3: Synthesize and Deliver

| Day | Activity | Milestone |
|---|---|---|
| Day 11-12 | Run synthesis framework: segment-level pattern analysis, outlier identification, contradiction analysis (customer evidence vs. management claims), confidence assessment. | Synthesis complete; findings mapped to thesis assumptions |
| Day 12-13 | Draft IC memo: executive summary with go/no-go recommendation, thesis-by-thesis validation, risk matrix, customer evidence appendix, recommended bid adjustments. | Draft IC memo circulated to deal team for review |
| Day 14-15 | Incorporate deal team feedback. Finalize IC memo. Prepare presentation-ready version for investment committee. Archive all transcripts and analysis in intelligence hub for post-close reference. | Final IC memo delivered; customer evidence available for IC presentation |

The critical enabler of this timeline is AI-moderated interviews. Traditional CDD customer research takes 4-8 weeks for 20-30 interviews. AI moderation completes 50-200 interviews in 72 hours at $20 each — which means the customer evidence arrives before the bid, not after.

CDD Stakeholder Mapping: Who Needs What


Commercial due diligence outputs serve four distinct audiences, each with different needs and different decision authority. Designing the deliverables with all four in mind ensures the customer evidence reaches the people who use it.

Deal Team (Partners and Principals)

  • What they need: The IC memo — thesis-by-thesis validation, aggregate risk score, go/no-go recommendation, and recommended bid adjustments. Deal team members make the investment recommendation and need the customer evidence distilled into a format that supports or challenges their thesis.
  • When they need it: Throughout the process. Preliminary findings (red flags, emerging patterns) should reach the deal team by mid-Week 2 so they can adjust their approach before the final memo. The completed IC memo is the Week 3 deliverable.
  • Key deliverable: IC memo with executive summary, risk matrix, and bid adjustment recommendations.

Operating Partners

  • What they need: The segment-level findings, the value creation hypotheses, and the first-100-day priorities. Operating partners translate CDD findings into the post-acquisition operating plan. They need to know which customer segments are healthy, which are at risk, and what operational investments will protect and grow the customer base.
  • When they need it: After the deal closes (or during final diligence if operating partners are involved pre-close). The CDD scoring baseline becomes their starting measurement for customer health.
  • Key deliverable: Segment-level risk assessment, value creation hypothesis mapping, and recommended operating priorities with customer evidence.

Portfolio Company Leadership (Post-Close)

  • What they need: The customer evidence that informs their first 90 days. What do customers love? What do they complain about? Where are the competitive vulnerabilities? What do customers wish the product did? Portfolio company CEOs inherit a customer base they did not build — CDD evidence gives them an honest external assessment that management self-reporting cannot provide.
  • When they need it: Day 1 post-close. The CDD findings should be packaged as an onboarding document for new portfolio company leadership.
  • Key deliverable: Customer health summary, competitive landscape from the customer perspective, and prioritized customer-identified improvement areas.

Board / Investment Committee

  • What they need: The executive summary and risk matrix. Board members evaluate deals across the portfolio and need the highest-level distillation: is the customer base an asset or a liability, what are the material risks, and how confident is the deal team in the thesis. Supporting evidence should be available but not presented unless requested.
  • When they need it: At the IC meeting. The one-page executive summary and risk matrix should be presentable in 5 minutes with supporting detail available for follow-up questions.
  • Key deliverable: One-page executive summary with aggregate risk score, top 3-5 findings, and go/no-go recommendation.

Adapting the Template by Deal Type


The core framework applies across deal types, but the emphasis shifts depending on the business model.

B2B SaaS

Primary focus: net revenue retention, competitive switching dynamics, expansion revenue potential. Probe deeply on contract renewal mechanics, multi-year commitment intent, usage-based vs. seat-based growth trajectories, and integration depth as a switching barrier. Pay special attention to the distinction between contractual retention (locked in) and voluntary retention (would choose to stay even without a contract).

Key questions to add:

  • How embedded is [product] in your daily workflows?
  • If your contract had no minimum commitment, would your usage change?
  • What would need to happen for you to consolidate more spending with [product]?

Consumer Brand

Primary focus: brand loyalty vs. habitual purchase, price sensitivity, channel dependency. Consumer interviews require larger samples (100+) because individual purchase decisions carry less signal than B2B relationships. Segment by purchase frequency, channel (DTC vs. retail vs. marketplace), and brand awareness source.

Key questions to add:

  • If [product] were not available at your usual store, would you seek it out or substitute?
  • What would a 20% price increase do to your purchase frequency?
  • How did you first learn about [product]? What keeps you buying it?

Marketplace / Platform

Primary focus: both supply-side and demand-side dynamics. You need two separate interview tracks — one for the participants who provide supply (sellers, drivers, hosts) and one for demand-side users (buyers, riders, guests). The platform’s value depends on the health of both sides.

Key questions to add (supply side):

  • What percentage of your income or business comes through [platform]?
  • Are you active on competing platforms? How do you allocate your supply?
  • What would cause you to reduce your activity on [platform]?

Key questions to add (demand side):

  • Do you compare options on multiple platforms before transacting?
  • What keeps you on [platform] versus alternatives?
  • How would a 10% fee increase change your behavior?

Healthcare

Primary focus: clinical workflow integration, regulatory compliance confidence, patient outcome perception, and procurement decision dynamics. Healthcare interviews involve multiple stakeholders for the same account — clinicians, administrators, IT, and procurement each hold different perspectives on the same product.

Key questions to add:

  • How does [product] fit into your clinical workflow? Could you practice without it?
  • How confident are you in [product]’s compliance with current regulatory requirements?
  • Who in your organization would need to approve a switch to an alternative? How many people are involved in that decision?

Template for Post-Acquisition Baseline


The same framework, modified for a different objective. Post-acquisition, the goal is not thesis validation — the deal is done. The goal is establishing a day-1 customer sentiment baseline that the operating team can measure against over the hold period.

Modifications to the standard template:

Thesis worksheet becomes a value creation hypothesis worksheet. Instead of mapping investment assumptions, map the planned value creation initiatives to customer hypotheses. If the 100-day plan includes a price increase, the hypothesis is: “Customers will absorb a 10% increase without material churn.” If the plan includes a product integration with a sister portfolio company, the hypothesis is: “Customers see value in the integrated offering and would pay for it.”

Sample plan adds a time dimension. Design the sample so you can re-run the identical study at 6-month intervals. Consistency in methodology across waves is critical — if you change the questions or the sample design, you cannot compare results over time.

Scoring rubric becomes a tracking dashboard. The same four dimensions (retention risk, competitive vulnerability, growth potential, champion dependency) measured quarterly produce trend lines that alert the operating team to shifting customer sentiment before it appears in financial metrics.

IC memo becomes a board report. Same structure, but the audience shifts from deal committee to board and operating partners. The recommendations section shifts from bid adjustments to operational priorities.

The post-acquisition baseline is also the foundation for exit preparation. When the time comes to sell, having 2-3 years of consistent customer sentiment data — showing improvement trends — is a powerful asset in the exit narrative. It provides the next buyer with exactly the kind of customer evidence this template is designed to produce. PE firms using User Intuition’s Customer Intelligence Hub maintain this longitudinal data automatically — every conversation feeds a searchable database that makes trend analysis a query rather than a new research project.

This template provides the operational structure for running commercial due diligence that goes beyond market analysis to capture the customer evidence that actually predicts deal outcomes. The framework is methodology-agnostic — it works whether interviews are conducted by humans or AI moderators — but the scale it demands (50-200 interviews in 72 hours) is where AI-moderated research becomes a practical necessity rather than a theoretical preference.

For the complete methodology behind independent customer recruitment, AI-moderated laddering interviews, and how PE firms are building institutional research memory across portfolio companies, see the complete guide to customer research for private equity.

Frequently Asked Questions

A complete commercial due diligence template should include five components: (1) a thesis-to-hypothesis mapping worksheet that converts investment assumptions into testable customer questions, (2) a sample plan template for recruiting 50+ customers segmented by size, tenure, and satisfaction, (3) a structured interview guide with warm-up, thesis-specific probes, 5-7 level laddering, and close sections, (4) a scoring rubric that quantifies retention risk, competitive vulnerability,.
How should due diligence interviews be structured?

Structure due diligence interviews in four phases: warm-up (2-3 minutes on relationship history and usage context), core probes (10-15 minutes testing thesis-specific hypotheses), depth laddering (10-15 minutes using 5-7 level why-chains to uncover root motivations and risks), and close (3-5 minutes covering NPS, switching intent, and expansion willingness). Each phase serves a distinct analytical purpose.
How many customers should you interview for commercial due diligence?

Interview a minimum of 50 customers for robust commercial due diligence. This provides enough data for pattern recognition across segments -- by customer size (enterprise, mid-market, SMB), tenure (under 1 year, 1-3 years, 3+ years), and satisfaction level (promoter, passive, detractor). For high-value deals or complex customer bases, 100-200 interviews deliver segment-level statistical confidence.
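As a rough illustration of the segmentation arithmetic, the quota math for a 50-interview sample split across the three axes above can be sketched in Python. The segment labels and the even-split allocation rule are illustrative assumptions, not part of the framework itself:

```python
from itertools import product

# Illustrative segment axes drawn from the sample plan above.
SIZES = ["enterprise", "mid-market", "smb"]
TENURES = ["<1y", "1-3y", "3y+"]
SATISFACTION = ["promoter", "passive", "detractor"]

def sample_plan(total_interviews):
    """Allocate interviews as evenly as possible across all segment cells."""
    cells = list(product(SIZES, TENURES, SATISFACTION))  # 27 cells
    base, remainder = divmod(total_interviews, len(cells))
    # Spread the remainder across the first cells so quotas sum exactly.
    return {cell: base + (1 if i < remainder else 0)
            for i, cell in enumerate(cells)}

plan = sample_plan(50)
assert sum(plan.values()) == 50  # quotas cover the full sample
```

In practice quotas are rarely perfectly even -- a deal team would weight cells by revenue concentration -- but the sketch shows why 50 interviews is the floor: with 27 size-tenure-satisfaction cells, anything smaller leaves segments empty.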
Can AI moderate due diligence interviews?

Yes. The framework is designed for both human and AI moderation. AI-moderated interviews follow the same structured guide, apply consistent 5-7 level laddering to every conversation, and eliminate interviewer bias across hundreds of interviews. AI moderation is what makes the scale practical: 50-200 independent customer interviews completed in 72 hours, within deal timelines, at $20 per interview. The scoring rubric and synthesis framework work identically regardless of moderation method.
How should the report be structured for the investment committee?

The most effective due diligence report for investment committees follows a thesis-validation structure: (1) executive summary with go/no-go recommendation and confidence level, (2) thesis-by-thesis validation or challenge with supporting customer evidence, (3) a risk matrix mapping likelihood and severity of identified risks, (4) direct customer verbatims organized by theme, and (5) recommended bid adjustments or deal structure modifications based on findings.
How does the framework scale with deal size?

Scale the sample plan and depth, not the framework. For small deals (under $50M enterprise value), 30-50 interviews across 2-3 segments provide sufficient coverage -- focus on retention risk and competitive vulnerability since these drive the most deal-breaking findings. For mid-market deals ($50M-$500M), the standard 50-100 interviews with full segmentation apply. For large-cap or platform deals (over $500M), scale to 150-200+ interviews and add geographic and vertical segmentation.
How do you run commercial due diligence on a compressed timeline?

In a competitive process, you may have 2-3 weeks instead of 4-6. Three tactics compress the timeline without sacrificing quality. First, use AI-moderated interviews to complete 50-200 conversations in 72 hours instead of 3-4 weeks. Second, pre-build the thesis definition worksheet from the CIM and management presentations before the customer interview window opens -- do not spend interview time on questions the CIM already answers. Third, share preliminary findings with the financial DD team mid-process rather than waiting for the final report, so the workstreams run in parallel.
How should CDD coordinate with financial due diligence?

CDD and financial DD should inform each other, not run in parallel silos. Map your CDD findings to specific financial model assumptions: if the model projects 90% net revenue retention, your CDD should directly test whether customers intend to renew and expand. If the model assumes 15% pricing power, your interviews should probe price sensitivity at that threshold. Share preliminary CDD findings with the financial DD team mid-process so they can adjust their sensitivity analysis.
Which red flags warrant immediate escalation?

Four red flags warrant immediate escalation to the deal team: (1) champion dependency -- when more than 40% of accounts rely on a single internal advocate whose departure would trigger churn, (2) competitor momentum -- when 30%+ of customers have actively evaluated a specific competitor in the past 12 months, (3) management-customer divergence -- when customer perceptions of product quality, support, or roadmap materially contradict management claims, and (4) segment concentration risk -- when dissatisfaction or churn intent clusters in a customer segment that drives a disproportionate share of revenue.
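The two quantitative thresholds above (champion dependency above 40%, competitor evaluation at 30% or more) can be sketched as a simple screening check. The function and field names are hypothetical, not a real interview-data schema:

```python
def escalation_flags(interviews):
    """Return the quantitative red flags triggered by a set of interviews.

    Each interview is a dict with illustrative boolean fields:
      single_champion          -- account depends on one internal advocate
      evaluated_competitor_12mo -- actively evaluated a rival in the past year
    """
    n = len(interviews)
    flags = []
    champion_dependent = sum(1 for i in interviews if i.get("single_champion"))
    if champion_dependent / n > 0.40:          # >40% threshold from the text
        flags.append("champion dependency")
    evaluated_rival = sum(1 for i in interviews
                          if i.get("evaluated_competitor_12mo"))
    if evaluated_rival / n >= 0.30:            # 30%+ threshold from the text
        flags.append("competitor momentum")
    return flags

# 45% champion-dependent and 55% evaluated a competitor -> both flags fire.
sample = ([{"single_champion": True, "evaluated_competitor_12mo": False}] * 45
          + [{"single_champion": False, "evaluated_competitor_12mo": True}] * 55)
```

The other two red flags (management-customer divergence, segment concentration) are judgment calls that resist a simple threshold, which is why they belong in the synthesis narrative rather than an automated screen.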
What happens to the CDD findings after the deal closes?

The CDD scoring rubric (retention risk, competitive vulnerability, growth potential, champion dependency) becomes your day-1 baseline. Re-run the identical study at 6 and 12 months post-close using the same discussion guide, the same scoring rubric, and the same segmentation. The delta between the CDD baseline and the post-acquisition measurement tells you whether the operating plan is working -- and surfaces emerging risks before they appear in financial metrics.
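A minimal sketch of that baseline-versus-follow-up comparison, assuming rubric scores on an illustrative 1-5 scale (all names and numbers below are hypothetical):

```python
# The four rubric dimensions named in the framework.
DIMENSIONS = ["retention_risk", "competitive_vulnerability",
              "growth_potential", "champion_dependency"]

def rubric_delta(baseline, followup):
    """Change per dimension between the CDD baseline and a re-measurement."""
    return {d: round(followup[d] - baseline[d], 2) for d in DIMENSIONS}

# Hypothetical day-1 baseline and 6-month re-measurement (1-5 scale).
baseline = {"retention_risk": 3.2, "competitive_vulnerability": 2.8,
            "growth_potential": 3.5, "champion_dependency": 4.1}
month_6  = {"retention_risk": 2.9, "competitive_vulnerability": 2.8,
            "growth_potential": 3.8, "champion_dependency": 3.6}

delta = rubric_delta(baseline, month_6)
# A negative delta on a risk dimension signals improvement against the baseline.
```

The point of the sketch is the discipline it encodes: because the guide, rubric, and segmentation are held constant, the delta is attributable to the operating plan rather than to measurement drift.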
Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
