Commercial due diligence for SaaS acquisitions is the process of validating whether a target company’s retention metrics — net revenue retention, gross revenue retention, churn rate, and expansion revenue — reflect durable customer behavior or temporary conditions that will not survive the hold period. It is the difference between paying a 15x ARR multiple for a business with genuinely sticky customers and overpaying for a revenue base that is quietly eroding behind contractual walls.
SaaS valuations are mechanically driven by retention. A company with 130% NRR commands a fundamentally different multiple than one with 95% NRR, even at identical ARR. The financial model treats NRR as a forward-looking predictor of revenue growth from the existing base. But NRR is a lagging indicator. It tells you what happened last year. It does not tell you what will happen when multi-year contracts come up for renewal, when a well-funded competitor launches a migration tool, or when the champion who drove adoption at the target’s largest customer changes jobs.
The data room tells you what the metrics are. Customer interviews tell you what the metrics will be. For PE deal teams evaluating SaaS targets, that distinction is worth tens of millions of dollars in bid accuracy.
This guide covers the six SaaS-specific signals to test in commercial due diligence, how to design the interview guide, sample questions for each signal, and a framework for calculating customer-validated NRR from primary research — the number you actually underwrite, as opposed to the number management presents.
For the complete PE customer research framework — from pre-LOI thesis validation through portfolio monitoring — see the complete guide to customer research for private equity. For the interview template and scoring rubric, see the commercial due diligence template.
Why SaaS CDD Is Different from Traditional Commercial Due Diligence
Traditional commercial due diligence evaluates market size, competitive dynamics, and growth trajectory. It is largely a secondary research exercise — industry reports, analyst coverage, management interviews, and a handful of customer reference calls. That framework was designed for asset-heavy businesses where revenue is driven by market position and operational execution.
SaaS businesses operate on a fundamentally different economic model. Revenue is subscription-based, which means the customer makes a new purchasing decision every contract cycle. The entire revenue base is theoretically at risk every year. Growth depends not just on acquiring new customers but on retaining and expanding existing ones. The unit economics — LTV, CAC payback, expansion revenue — all flow from retention behavior.
This changes what CDD needs to validate.
In a traditional acquisition, the core commercial question is: “Is the market attractive and is the company well-positioned?” In a SaaS acquisition, the core commercial question is: “Will the existing customers keep paying, pay more, or leave?” Everything else — market size, competitive landscape, product roadmap — is contextual. Retention is the thesis.
That is why the data room is necessary but insufficient. The data room contains the financial artifacts of customer behavior: renewal rates, expansion revenue, churn cohorts, contract terms. What it cannot contain is the intent, satisfaction, and competitive awareness that will determine whether those metrics hold, improve, or deteriorate over the next 3-5 years.
Customer interviews fill that gap. Conducted independently — without the target company’s involvement or knowledge — they surface the forward-looking signals that financial data structurally cannot capture.
What Are the 6 SaaS-Specific Signals to Test in CDD?
Every SaaS CDD should test six signals. Each one maps to a specific risk or opportunity in the financial model. Together, they produce a customer-validated view of the target’s retention economics.
Signal 1: Net Revenue Retention Trajectory
What the data room tells you: The reported NRR — typically a trailing twelve-month figure, sometimes segmented by customer cohort or size tier.
What customer interviews tell you: Whether the NRR is stable, improving, or about to decline. NRR is a blended output of four underlying behaviors: renewal, expansion, contraction, and churn. Each behavior has its own drivers, and those drivers are not visible in the financial data.
A target might report 125% NRR. That number looks excellent. But customer interviews might reveal that 60% of the expansion revenue came from a single product add-on that most customers now consider fully deployed — meaning expansion will plateau. Or that a large cohort of customers renewed in the last 6 months but expressed significant dissatisfaction that will manifest as churn at the next renewal cycle.
The NRR number is backward-looking. The interview data is forward-looking. The delta between them is where bid risk lives.
What to listen for:
- Customers describing their usage as “mature” or “fully deployed” — signals expansion ceiling
- Recent renewals accompanied by complaints about the renewal process, pricing, or support — signals fragile retention
- Customers who expanded recently but describe the expansion as a one-time event rather than an ongoing trajectory
- Enthusiasm gap between long-tenured and recently acquired customers
Signal 2: Churn Risk by Segment
What the data room tells you: Aggregate churn rate, sometimes broken down by cohort or customer size. Possibly a list of recently churned accounts with stated reasons.
What customer interviews tell you: Which segments are at genuine risk of churning and why — before it shows up in the metrics.
Aggregate churn rates hide segment-level dynamics. A target with 5% annual logo churn might have 2% churn among enterprise accounts and 15% churn among SMB accounts. The enterprise retention looks healthy until you discover that three of the five largest enterprise contracts are up for renewal in the next 12 months and all three customers are evaluating competitors.
The most dangerous churn risk in SaaS is invisible churn — customers who are dissatisfied but have not yet acted. Multi-year contracts are the primary mechanism. A customer locked into a 3-year agreement may be deeply unhappy but will not appear in any churn metric until the contract expires. If 30% of the target’s ARR sits on multi-year contracts expiring in the next 18 months, and customer interviews reveal meaningful dissatisfaction in that cohort, the reported churn rate is not the real churn rate. It is the churn rate after you subtract the customers who want to leave but cannot yet.
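The adjustment described here can be sketched in a few lines. The figures below are illustrative, not from any actual target; a minimal Python sketch of blending reported churn with interview-derived risk on the locked-in cohort:

```python
# Illustrative adjustment of reported churn for "invisible churn":
# dissatisfied customers locked into multi-year contracts. All
# figures are hypothetical assumptions for the sketch.

def forward_churn_rate(reported_churn, multiyear_arr_share,
                       expiring_share, at_risk_share):
    """Blend trailing churn on the open base with interview-derived
    risk on the multi-year cohort whose contracts expire in the window.

    reported_churn      -- trailing churn rate on the open base
    multiyear_arr_share -- fraction of ARR on multi-year contracts
    expiring_share      -- fraction of that cohort expiring in window
    at_risk_share       -- interview-derived share of expiring ARR
                           signaling intent to leave
    """
    locked_expiring = multiyear_arr_share * expiring_share
    open_base = 1.0 - locked_expiring
    return open_base * reported_churn + locked_expiring * at_risk_share

# 5% reported churn, 30% of ARR on multi-year deals expiring in the
# window, interviews flag 25% of that cohort as at risk:
rate = forward_churn_rate(0.05, 0.30, 1.0, 0.25)
print(f"{rate:.1%}")  # 11.0% forward churn vs. 5% reported
```

The point of the sketch is the asymmetry: a modest at-risk share on a locked cohort can more than double the forward churn rate relative to the reported figure.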
For a deep dive into the methodology of churn analysis — including the quantitative frameworks and interview techniques — see the complete guide to churn analysis.
What to listen for:
- Customers on multi-year contracts expressing frustration with the product, support, or pricing
- Customers describing themselves as “stuck” or mentioning switching costs as the primary reason they stay
- Specific competitor mentions — not vague awareness but active evaluation
- Segment-level patterns: SMB customers churning for different reasons than enterprise customers
Signal 3: Expansion Willingness
What the data room tells you: Historical expansion revenue, often expressed as a component of NRR. May include upsell/cross-sell conversion rates and average expansion deal size.
What customer interviews tell you: Whether future expansion is probable, possible, or exhausted.
Expansion revenue is the engine of SaaS valuation premiums. A company growing its existing customer base at 20-30% annually commands a dramatically different multiple than one growing at 5%. But historical expansion data has a ceiling problem: past expansion does not guarantee future expansion.
Customer interviews test three dimensions of expansion willingness:
- Need. Do customers have unmet needs the product could address? Are they using workarounds or third-party tools for adjacent problems?
- Intent. Are customers actively planning to expand — adding seats, upgrading tiers, purchasing add-ons? Or is their current deployment “the right size”?
- Budget. Would expansion require new budget approval, or can it be absorbed within existing spend? Budget friction is the most common silent killer of expansion pipelines.
The most dangerous pattern is concentrated expansion — when the historical expansion revenue is driven by a small number of accounts. If 10 accounts generated 70% of last year’s expansion revenue, the forward-looking expansion opportunity is far narrower than the headline number suggests.
What to listen for:
- Customers describing their deployment as “right-sized” or “complete” — signals expansion ceiling
- Active plans to add seats, teams, or use cases — signals genuine expansion pipeline
- Budget constraints or approval processes that would slow expansion
- Customers who expanded once and view it as a one-time decision
Signal 4: Competitive Switching Intent
What the data room tells you: Management’s view of the competitive landscape, usually presented as a positioning matrix where the target occupies the upper-right quadrant.
What customer interviews tell you: Which competitors customers actually know about, have evaluated, and might switch to.
Management competitive assessments are systematically biased. The target company has every incentive to minimize competitive threats in a sale process. Customer interviews bypass this bias entirely by asking the people who actually make purchasing decisions how they perceive the competitive landscape.
The critical distinction is between competitive awareness and competitive intent. Many customers are aware of alternatives without seriously considering switching. That is healthy. The risk signal is customers who have actively evaluated alternatives, received demos, or requested pricing from competitors — even if they have not switched yet. That is not awareness. That is pipeline leakage.
In SaaS, competitive switching intent is particularly dangerous when it clusters by segment. If mid-market customers across the board mention the same competitor and the same switching trigger (usually price or a specific feature gap), that is not random noise. That is a market signal that the data room cannot capture.
What to listen for:
- Specific competitor names mentioned unprompted
- Customers who have received demos or pricing from alternatives in the last 12 months
- Switching triggers: specific features, pricing changes, or support failures that would cause a customer to actively shop
- Segment-level clustering of competitive mentions
Signal 5: Pricing Power
What the data room tells you: Current pricing, historical price increases, and their impact on churn and expansion. Possibly customer-level pricing data showing variance by segment or vintage.
What customer interviews tell you: Whether the target can raise prices post-acquisition without triggering churn — and how much headroom exists.
Pricing power is one of the most common value creation levers in PE-backed SaaS. The playbook is straightforward: acquire, raise prices 15-30%, and capture margin improvement. But pricing power is not universal. It depends on customer perception of value relative to cost, the availability of alternatives, and the switching costs that make price increases tolerable.
Customer interviews test pricing power directly. Not by asking “Would you pay more?” — which produces unreliable stated-preference data — but by probing the value-to-price relationship from multiple angles. How do customers perceive the ROI? What would they do if the price increased by 20%? What is the maximum they would pay before seriously evaluating alternatives? At what price point does the product become a commodity purchase subject to procurement pressure?
The red flag is customers who describe themselves as price-sensitive while currently paying below the target’s standard pricing. This cohort — often legacy customers grandfathered into old pricing — represents a concentration of churn risk if post-acquisition price normalization is part of the value creation plan.
What to listen for:
- Customers describing the product as “good value” versus “expensive but worth it” — the distinction reveals pricing headroom
- References to cheaper alternatives, even if not currently considering a switch
- Customers who negotiate aggressively at renewal — signals price sensitivity
- The “would you pay more” threshold: the specific price increase that triggers competitive evaluation
- Customers on legacy pricing who have not been exposed to current rates
Signal 6: Champion Dependency
What the data room tells you: Nothing. This is the signal with the largest gap between data room visibility and actual risk.
What customer interviews tell you: Whether the target’s retention depends on specific individuals at customer organizations rather than organizational-level value.
Champion dependency is the hidden structural risk in B2B SaaS. A product might have excellent NRR, low churn, and strong NPS — but all of those metrics depend on the continued employment and advocacy of specific champions within each customer account. When the VP who championed the purchase leaves, the account becomes an orphan. The new leader has no emotional attachment to the product, inherits a cost line they did not choose, and becomes immediately receptive to alternatives.
This risk is invisible in the data room because champion turnover has not happened yet. It surfaces only in customer interviews — specifically, by understanding how deeply the product is embedded in organizational workflows versus how dependent it is on a single decision-maker’s preference.
The worst version of champion dependency is concentrated champion dependency: when the target’s largest accounts are each dependent on a single senior advocate. If the top 10 accounts (representing perhaps 30-40% of ARR) each have a single champion, the revenue base is far more fragile than the retention metrics suggest.
What to listen for:
- Customers where one person drives all usage decisions and renewals
- Accounts where the original buyer has left and the current contact inherited the product
- Organizational embedding signals: is the product integrated into workflows used by multiple teams, or is it a tool used by one person or department?
- Customers who say “if [person] left, I’m not sure what would happen with the product”
What the Data Room Tells You vs. What Customer Interviews Tell You
The following table summarizes the structural gap between data room evidence and customer interview evidence for SaaS CDD.
| Dimension | Data Room Evidence | Customer Interview Evidence |
|---|---|---|
| NRR / GRR | Trailing twelve-month calculation, cohort breakdowns | Whether the NRR is sustainable: forward-looking renewal intent, expansion plans, satisfaction behind the number |
| Churn | Historical churn rate, churned account list, stated reasons | Hidden churn risk: dissatisfied customers on multi-year contracts, segment-level vulnerability, specific competitive threats |
| Expansion | Historical expansion revenue, upsell conversion rates | Whether future expansion is probable: unmet needs, budget availability, deployment maturity, concentration risk |
| Competition | Management positioning matrix, win/loss data (if available) | Real competitive landscape from the customer’s perspective: who they have evaluated, switching triggers, segment-level patterns |
| Pricing | Current pricing, historical increases, price-to-churn correlation | Pricing headroom: perceived value-to-cost ratio, increase tolerance thresholds, legacy pricing risk |
| Champion risk | Nothing | Single-person dependency, organizational embedding depth, succession risk at key accounts |
| Customer satisfaction | NPS score, CSAT surveys | Depth of satisfaction: genuine enthusiasm vs. passive tolerance, satisfaction trajectory, unvoiced frustrations |
The pattern is consistent: the data room captures what has happened. Customer interviews capture what will happen. For a SaaS business where the entire revenue base re-decides every contract cycle, the forward-looking view is the one that determines whether you are overpaying.
How to Design a SaaS CDD Interview Guide
A SaaS CDD interview guide must accomplish four things in 25-35 minutes: establish usage context, test each of the six signals, probe below surface-level responses using laddering methodology, and collect comparable data across respondents for quantitative synthesis.
The structure follows a funnel: open-ended context questions first, thesis-specific probes second, depth laddering third, and structured quantitative questions last. This sequence prevents leading questions from contaminating early responses — a critical design principle when interview data will directly inform bid pricing.
For the complete interview template with scoring rubrics, see the commercial due diligence template.
Phase 1: Usage Context (3-5 minutes)
Establish the customer’s relationship with the product before probing specific signals. These questions also serve as segmentation data for analysis.
- How long have you been using [product]? What was the original reason for selecting it?
- How many people at your organization use the product today? Has that number changed in the last year?
- What is your role in the purchasing and renewal decision?
- How would you describe the product’s importance to your daily operations?
Phase 2: Signal-Specific Probes (15-20 minutes)
Each of the six signals gets a dedicated question cluster. The interviewer (or AI moderator) uses five-to-seven-level laddering (successive "why" probes) to go deeper on any response that surfaces risk or opportunity.
NRR Trajectory Questions:
- When your contract comes up for renewal, what factors will influence your decision?
- Have you expanded your usage in the past 12 months? What drove that decision?
- Do you anticipate your spend with [product] increasing, staying the same, or decreasing over the next year? Why?
- Are there capabilities you currently get from [product] that you could get elsewhere? Which ones?
Churn Risk Questions:
- If you could go back to the moment before you purchased [product], knowing what you know now, would you make the same decision? What would be different?
- What would need to change about [product] for you to seriously consider an alternative?
- Have you experienced any frustrations in the last 12 months that made you reconsider the relationship?
- On a scale of 1-10, how likely are you to renew at your current contract terms? [Then ladder into the reasoning behind the number]
Expansion Willingness Questions:
- Are there problems adjacent to what [product] solves that you currently handle with other tools or manual processes?
- If [product] offered [specific upsell capability], would that be something your organization would evaluate? What would the budget conversation look like?
- What would need to be true for you to expand your usage beyond its current scope?
- How does your organization typically approve increases in software spend? What is the threshold that requires executive approval?
Competitive Switching Intent Questions:
- When you last evaluated your options in this category, what alternatives did you consider?
- Are you aware of [specific competitor]? Have you seen a demo or received pricing in the last 12 months?
- What would a competitor need to offer for you to seriously evaluate switching?
- If a competitor offered a migration path that eliminated switching costs, how would that change your calculation?
Pricing Power Questions:
- How do you perceive the value you get relative to what you pay for [product]?
- If the price increased by 15-20% at your next renewal, what would you do?
- At what price point would [product] no longer be a clear choice — where you would run a formal evaluation of alternatives?
- How does [product] pricing compare to what you pay for similar tools in adjacent categories?
Champion Dependency Questions:
- Who at your organization would be most affected if [product] were discontinued tomorrow?
- If the person who originally championed [product] left the organization, what would happen to the account?
- Is [product] integrated into workflows that multiple teams depend on, or is it primarily used by one team or individual?
- When was the last time someone at your organization questioned whether [product] was the right choice?
Phase 3: Structured Quantitative Capture (3-5 minutes)
Close with standardized questions that produce comparable data across all interviews. These inputs feed directly into the customer-validated NRR calculation.
- On a scale of 0-10, how likely are you to recommend [product] to a peer? [NPS]
- On a scale of 1-5, rate your overall satisfaction with [product].
- On a scale of 1-5, rate your likelihood of renewing at current terms.
- On a scale of 1-5, rate your likelihood of expanding usage in the next 12 months.
- On a scale of 1-5, rate the competitive strength of [product] versus alternatives you are aware of.
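Before these ratings can feed the validated-NRR model, they have to be converted into probabilities. The mapping below is a hypothetical calibration, not an industry standard: the lookup values and adjustment flags are assumptions, and in practice the mapping should be calibrated against ground truth from churned-customer interviews.

```python
# One possible calibration from 1-5 interview ratings to the
# probabilities used in the validated-NRR model. The mapping values
# and qualitative flags below are illustrative assumptions.

RENEWAL_PROB = {1: 0.10, 2: 0.35, 3: 0.60, 4: 0.85, 5: 0.97}
EXPANSION_PROB = {1: 0.00, 2: 0.05, 3: 0.15, 4: 0.40, 5: 0.70}

def score_customer(renewal_rating, expansion_rating,
                   competitive_eval=False, deep_embedding=False):
    """Map structured ratings to probabilities, nudged by
    qualitative flags from the probing phase."""
    p_renew = RENEWAL_PROB[renewal_rating]
    if competitive_eval:   # active demos/pricing from rivals
        p_renew = max(0.0, p_renew - 0.15)
    if deep_embedding:     # multi-team workflow integration
        p_renew = min(1.0, p_renew + 0.05)
    return p_renew, EXPANSION_PROB[expansion_rating]

# A 4/5 renewal rating, 3/5 expansion rating, with an active
# competitive evaluation flagged in the probing phase:
p_renew, p_expand = score_customer(4, 3, competitive_eval=True)
print(round(p_renew, 2), p_expand)  # 0.7 0.15
```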
How to Calculate Customer-Validated NRR
This is the analytical framework that converts qualitative interview data into a quantitative metric comparable to the reported NRR. When the customer-validated NRR diverges materially from the reported NRR, it directly impacts the bid price.
Step 1: Segment the Interview Base
Map each interviewed customer to a revenue segment based on contract value. The segmentation must mirror the target’s revenue distribution so that interview-derived signals are revenue-weighted, not customer-count-weighted.
| Segment | Revenue Share | Interview Target | Weight |
|---|---|---|---|
| Enterprise (top 20% by ARR) | 55% | 15-25 interviews | 0.55 |
| Mid-market (middle 60%) | 35% | 25-40 interviews | 0.35 |
| SMB (bottom 20%) | 10% | 10-15 interviews | 0.10 |
Step 2: Score Each Customer on Four Dimensions
From the interview data, assign each customer a score on four dimensions using the structured quantitative capture and the qualitative context from the probing phase.
- Renewal Probability (0-100%): Based on stated renewal intent, satisfaction depth, competitive awareness, and contract status. A customer who says “definitely renewing” but also describes active competitive evaluation gets a lower probability than one who says “definitely renewing” and describes deep organizational embedding.
- Expansion Probability (0-100%): Based on stated expansion plans, unmet needs, budget availability, and deployment maturity. Weight genuine intent (approved budget, specific timeline) higher than aspirational statements.
- Expected Expansion Magnitude (percentage of current spend): For customers with non-zero expansion probability, estimate the size of the expansion as a percentage of their current contract value.
- Contraction Risk (0-100% probability, with estimated magnitude): Based on signals of dissatisfaction, feature underutilization, or organizational changes that would reduce usage.
Step 3: Calculate Segment-Level Validated NRR
For each segment, calculate the expected revenue retention:
Segment Validated NRR = (Sum of: Each customer’s current ARR x Renewal Probability x (1 + Expansion Probability x Expansion Magnitude - Contraction Risk x Contraction Magnitude)) / Total Segment ARR
Step 4: Blend to Company-Level Validated NRR
Weight each segment by its revenue share:
Customer-Validated NRR = (Enterprise Validated NRR x 0.55) + (Mid-Market Validated NRR x 0.35) + (SMB Validated NRR x 0.10)
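Steps 2 through 4 can be expressed in a few lines. The sketch below uses hypothetical customer records; the field names mirror the four scoring dimensions above, and the segment weights match the table in Step 1:

```python
# Sketch of Steps 2-4: per-customer scores -> segment validated NRR
# -> revenue-weighted company figure. Customer records are
# hypothetical examples, not real deal data.

def segment_validated_nrr(customers):
    """customers: list of dicts with arr, p_renew, p_expand,
    expand_mag, p_contract, contract_mag (probabilities 0-1,
    magnitudes as a fraction of current ARR)."""
    total_arr = sum(c["arr"] for c in customers)
    retained = sum(
        c["arr"] * c["p_renew"] * (
            1 + c["p_expand"] * c["expand_mag"]
              - c["p_contract"] * c["contract_mag"])
        for c in customers)
    return retained / total_arr

def company_validated_nrr(segment_nrrs, revenue_weights):
    """Blend segment-level NRRs by revenue share (weights sum to 1)."""
    return sum(nrr * w for nrr, w in zip(segment_nrrs, revenue_weights))

enterprise = [
    {"arr": 500_000, "p_renew": 0.95, "p_expand": 0.60,
     "expand_mag": 0.30, "p_contract": 0.05, "contract_mag": 0.20},
    {"arr": 300_000, "p_renew": 0.90, "p_expand": 0.20,
     "expand_mag": 0.15, "p_contract": 0.10, "contract_mag": 0.25},
]
ent_nrr = segment_validated_nrr(enterprise)
# Assume mid-market and SMB segments scored 105% and 92%:
blended = company_validated_nrr([ent_nrr, 1.05, 0.92],
                                [0.55, 0.35, 0.10])
print(f"enterprise {ent_nrr:.1%}, blended {blended:.1%}")
# enterprise 103.4%, blended 102.8%
```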
Step 5: Compare to Reported NRR
The critical output is the delta between the customer-validated NRR and the reported NRR.
| Delta | Interpretation | Bid Impact |
|---|---|---|
| Within +/- 5 points | Reported NRR is credible | Underwrite at or near reported NRR |
| -5 to -15 points | Meaningful downside risk | Adjust revenue projections downward; model NRR convergence to validated level over 12-24 months |
| Greater than -15 points | Reported NRR is not sustainable | Material bid reduction or re-evaluation of deal thesis; model accelerated NRR decline |
| Greater than +5 points | Reported NRR understates health | Potential upside not in the base case; expansion opportunity larger than management represents |
A 10-point NRR gap on a $50M ARR business — say, reported NRR of 125% versus customer-validated NRR of 115% — represents a $5M annual revenue difference from the existing base alone. Compounded over a 5-year hold period, that gap is worth $25-40M in terminal value, depending on the exit multiple. That is the financial weight of customer interviews.
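The year-by-year widening of that gap can be sketched directly. One caveat: holding NRR constant over the full hold is a simplification that overstates later years (NRR typically drifts down as a base matures), so treat the later figures as an upper bound rather than a forecast.

```python
# Rough sizing of the NRR gap from the example above ($50M ARR,
# reported 125% NRR vs. customer-validated 115%). Constant NRR over
# the hold is a simplifying assumption, not a forecast.

def base_arr_path(arr, nrr, years):
    """Existing-base ARR at each year-end under a constant NRR."""
    return [arr * nrr ** y for y in range(1, years + 1)]

reported = base_arr_path(50e6, 1.25, 5)
validated = base_arr_path(50e6, 1.15, 5)
for year, (r, v) in enumerate(zip(reported, validated), start=1):
    print(f"year {year}: gap ${(r - v) / 1e6:.1f}M")
# year 1 shows the $5.0M gap from the example; the gap then
# compounds every subsequent year of the hold.
```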
Red Flags That Surface Only in Customer Interviews
The following red flags are invisible in the data room. They produce healthy-looking metrics today while signaling deterioration that will materialize during the hold period.
Red Flag 1: Hidden Dissatisfaction Behind High NRR
The reported NRR is 120%. The customer interviews reveal that 35% of the customer base on multi-year contracts is dissatisfied and plans to churn or contract at renewal. The current NRR looks strong because these customers have not yet had the opportunity to leave.
How it appears in the data room: Strong NRR, low churn rate, high percentage of ARR on multi-year contracts (which management presents as a strength).
How it appears in customer interviews: Customers on multi-year contracts describe themselves as “locked in” rather than “committed.” They mention frustrations that have accumulated over the contract term. When asked about renewal intent, they hedge — “we will evaluate our options” rather than “we plan to renew.” Some have already received competitive demos.
Bid impact: Model the NRR as if multi-year contract customers retain at the interview-derived rate, not the historical rate. If 35% of multi-year ARR is at risk, the forward-looking NRR is materially lower than the trailing figure.
Red Flag 2: Concentrated Expansion Revenue
The target reports $8M in expansion revenue last year, contributing to a strong NRR. Customer interviews reveal that $5.5M of that expansion came from 6 accounts — large enterprise customers executing one-time deployments to new divisions. The remaining $2.5M was distributed across 200+ accounts.
How it appears in the data room: Healthy expansion revenue, strong NRR, growing average contract value.
How it appears in customer interviews: The 6 large expansion accounts describe the expansion as a completed project, not an ongoing growth trajectory. They are “fully deployed.” The broader customer base reports minimal expansion intent. Some describe their deployment as “right-sized.”
Bid impact: The sustainable expansion run rate is $2.5M, not $8M. Model the NRR with normalized expansion, not the anomalous year.
Red Flag 3: Inertia-Based Retention Masquerading as Loyalty
The target has 95% gross revenue retention and a high NPS score. Customer interviews reveal that a significant portion of the customer base stays because switching costs are high — data migration, workflow retraining, integration rebuilding — not because they are satisfied. They describe the product as “good enough” and the switching cost as “not worth the hassle.” Their NPS scores are 7s and 8s — passive, not promoter — but the aggregate NPS looks healthy because truly dissatisfied customers already churned.
How it appears in the data room: Strong GRR, decent NPS, low active churn.
How it appears in customer interviews: Customers using words like “adequate,” “does the job,” “we have invested too much to switch.” Limited enthusiasm. No advocacy. When asked what a competitor would need to offer to trigger a switch, the threshold is lower than expected — a free migration tool and a 20% discount would move a material portion of the base.
Bid impact: This retention is structurally vulnerable to any competitor that reduces switching costs. If a well-funded competitor launches a migration accelerator, the retention rate could deteriorate rapidly. The GRR should be stress-tested against a scenario where switching costs decline.
Red Flag 4: NPS That Masks Segment-Level Problems
The target reports an overall NPS of 45 — well above the SaaS median. But customer interviews segmented by customer size reveal that enterprise NPS is 62 while SMB NPS is 18. The aggregate number is pulled up by enthusiastic enterprise customers while SMB customers — who represent 40% of logo count — are at meaningful risk of churn.
How it appears in the data room: Healthy aggregate NPS, possibly with limited segmentation.
How it appears in customer interviews: Enterprise customers describe the product as essential and deeply integrated. SMB customers describe it as overpriced for their use case, with complexity they do not need and support that is oriented toward larger accounts. Two different products, two different experiences, one blended NPS.
Bid impact: If the investment thesis assumes growth in the SMB segment, the thesis is challenged. If it assumes enterprise focus, the SMB segment is a drag that may require strategic pruning or a differentiated product tier.
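The masking arithmetic in this red flag is easy to reproduce. Using the figures from the example (enterprise NPS 62, SMB NPS 18, SMB at 40% of logo count):

```python
# How a blended NPS hides segment problems. Figures mirror the
# Red Flag 4 example: enterprise NPS 62, SMB NPS 18, SMB = 40%
# of logo count.

def blended_nps(segment_nps, logo_weights):
    """NPS averages over respondents, so it blends by logo count,
    not by revenue -- which is exactly how segment pain disappears
    in the aggregate figure."""
    return sum(nps * w for nps, w in zip(segment_nps, logo_weights))

print(round(blended_nps([62, 18], [0.60, 0.40]), 1))  # 44.4
# A struggling SMB segment vanishes into a healthy-looking ~45.
```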
Designing the Sample Plan for SaaS CDD
The sample plan for SaaS CDD requires segmentation along dimensions that map to SaaS-specific retention dynamics. The standard CDD segmentation (size, tenure, satisfaction) is necessary but not sufficient.
SaaS-Specific Segmentation Dimensions
Contract type. Annual contracts, multi-year contracts, and month-to-month customers have fundamentally different retention dynamics. Multi-year customers may be dissatisfied but not yet visible as churn risk. Month-to-month customers are a real-time indicator of product-market fit.
Expansion history. Customers who have expanded are different from those at their original contract value. The former group provides evidence about expansion ceiling. The latter provides evidence about expansion barriers.
Renewal timing. Customers whose contracts expire in the next 6-12 months are the most valuable interviews for forward-looking churn prediction. They are the cohort whose behavior will first impact the NRR after acquisition.
Product tier. For targets with multiple product tiers or pricing plans, satisfaction and retention vary dramatically by tier. An enterprise product might be excellent while the mid-market offering is a competitive liability.
Recommended Sample Allocation
For a 75-interview SaaS CDD study:
| Segment | Count | Priority |
|---|---|---|
| Enterprise, multi-year contract, renewal in next 12 months | 10 | Highest — direct NRR risk assessment |
| Enterprise, annual contract, expanded in last 12 months | 8 | Expansion sustainability |
| Mid-market, annual contract, no expansion | 12 | Expansion barrier identification |
| Mid-market, any contract, renewal in next 6 months | 10 | Near-term churn risk |
| SMB, month-to-month | 10 | Real-time retention signal |
| SMB, annual contract | 8 | Segment-level satisfaction |
| Recently churned (last 6 months) | 10 | Ground truth on churn drivers |
| New customers (under 6 months) | 7 | Onboarding health, early satisfaction |
This allocation over-indexes on segments with the highest information value for retention prediction. Recently churned customers provide ground truth on churn drivers that current customers may not articulate. New customers provide a leading indicator of future retention — if recent onboarding cohorts are less satisfied than tenured customers, the retention rate is likely to deteriorate.
For the recruitment methodology and screening criteria, see the customer due diligence questions guide.
From Interview Data to IC Memo
The synthesis process converts 50-150 individual interviews into a structured narrative for the investment committee. The IC memo for a SaaS CDD should be organized around the six signals, with each signal section following the same format:
- Thesis assumption — what the financial model assumes about this dimension
- Customer evidence — what the interviews revealed, with segment-level analysis
- Validated metric — the customer-validated version of the reported metric
- Risk assessment — likelihood and severity of the identified risk
- Bid implication — how the finding should adjust the financial model
Example IC Memo Section: NRR Trajectory
Thesis assumption: Management reports 128% NRR and projects 125%+ sustained over the hold period.
Customer evidence: Enterprise customers (55% of ARR) show strong renewal intent and active expansion plans, validating NRR above 130% for this segment. Mid-market customers (35% of ARR) show weaker signals: 40% describe their deployment as “right-sized” with limited expansion intent, and 15% are actively evaluating a competitor that launched a comparable product at 30% lower pricing last quarter. SMB customers (10% of ARR) show the weakest signals: month-to-month customers report high price sensitivity and limited integration depth.
Customer-validated NRR: 118% (vs. reported 128%). The 10-point gap is driven primarily by mid-market expansion plateau and SMB churn risk. Enterprise segment validates above the reported figure.
Risk assessment: Medium-high. The mid-market competitive threat is specific and near-term. The SMB segment shows structural pricing vulnerability. Enterprise segment is sound.
Bid implication: Model NRR convergence from 128% to 118% over 18 months. Adjust terminal value assumptions accordingly. Consider deal structure that includes an earnout tied to NRR maintenance above 120%.
When to Run SaaS CDD
Customer interviews for SaaS CDD are most valuable at three points in the deal lifecycle:
Pre-LOI thesis check (30-50 interviews, 72 hours). A rapid study that tests the 2-3 highest-risk thesis assumptions before the deal team commits to exclusivity. At $20 per interview, this is a $600-$1,000 investment to de-risk a multi-million dollar capital allocation decision.
Full CDD post-LOI (75-150 interviews, 72-96 hours). The comprehensive study covering all six signals with segment-level depth. This is the study that produces the customer-validated NRR and the full risk matrix for the IC memo.
First 100 days post-close (50-75 interviews). Establishes the customer perception baseline at the start of the hold period. This baseline becomes the measurement anchor for every value creation initiative — pricing changes, product investments, go-to-market shifts — over the hold period.
For firms running SaaS CDD at scale across multiple portfolio companies, the commercial due diligence solution provides a standardized framework that produces comparable data across targets and over time.
The Bottom Line
SaaS valuations are retention-metric valuations. The multiple a buyer pays is a bet on the durability of NRR, GRR, and expansion revenue. The data room shows you the current state of those metrics. Customer interviews show you whether that state is stable, improving, or about to deteriorate.
The six signals — NRR trajectory, churn risk by segment, expansion willingness, competitive switching intent, pricing power, and champion dependency — are the forward-looking indicators that financial data cannot capture. Testing them through 50-150 independent customer interviews, completed in 72 hours, gives the deal team evidence that directly informs the bid price.
When the customer-validated NRR matches the reported NRR, you can underwrite with confidence. When they diverge, you have discovered something that every other bidder — relying on the same data room and the same management presentation — has missed. That asymmetric information is the edge.
For PE deal teams evaluating SaaS targets, the question is not whether customer interviews are worth the investment. At $20 per interview, a 75-interview study costs $1,500. The question is whether you can afford to bid on a SaaS company without knowing what the customers actually think.