The Survey Problem in Due Diligence
Surveys produce two types of misleading data in CDD contexts:
1. False precision
A survey might report: “87% of customers rate satisfaction 7 or above on a 10-point scale.” This sounds precise and positive. But it tells you nothing about:
- Whether that 7 means “genuinely satisfied” or “not unhappy enough to switch yet”
- What would move a 7 to a 3 (one competitor launch, one price increase, one support failure)
- Whether the 13% below 7 are the company’s largest customers
- Why satisfaction is at the level it is
IC members who see “87% satisfaction” may assume the retention thesis is validated. An AI-moderated interview with those same customers would reveal that half the 7s are conditional — “satisfied as long as the price doesn’t increase” or “satisfied but watching what [competitor] does next.”
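The gap between a headline satisfaction figure and revenue-weighted exposure can be made concrete with a short sketch. All customer data below is hypothetical, invented purely to illustrate the arithmetic: when the minority below 7 holds the largest accounts, the revenue at risk is far larger than the headline suggests.

```python
# Hypothetical customer base: (ARR in dollars, satisfaction on a 10-point scale).
customers = [
    (500_000, 5),   # largest account, dissatisfied
    (400_000, 6),   # second-largest account, dissatisfied
    (100_000, 8),
    (100_000, 9),
    (100_000, 8),
    (100_000, 7),
    (100_000, 9),
    (100_000, 8),
]

# Headline metric: share of customers scoring 7 or above.
headline = sum(1 for _, score in customers if score >= 7) / len(customers)

# Revenue-weighted view: share of total ARR held by those satisfied customers.
total_arr = sum(arr for arr, _ in customers)
satisfied_arr_share = sum(arr for arr, score in customers if score >= 7) / total_arr

print(f"Headline satisfaction:            {headline:.0%}")             # 75%
print(f"ARR held by satisfied customers:  {satisfied_arr_share:.0%}")  # 40%
```

In this invented example, a "75% satisfied" headline coexists with 60% of ARR sitting in dissatisfied accounts, which is exactly the distinction a survey summary hides.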
2. Social desirability bias amplified
Surveys trigger social desirability bias without any mechanism to probe beyond it. Customers default to positive responses because negativity requires more cognitive effort and feels uncomfortable, even in anonymous surveys. AI-moderated interviews create conversational space where customers naturally elaborate, qualify, and reveal complexity that survey responses flatten.
The AI Interview Advantage for CDD
Adaptive probing
AI moderation adapts to each response. When a customer mentions a competitor, the AI probes deeper. When a customer expresses ambivalence, the AI explores the conditions. Surveys follow fixed paths regardless of response content.
Contextual understanding
“We are satisfied” in a survey is a data point. “We are satisfied, but we only use 30% of the features and our team has been asking about [competitor] since they released their new platform” in an interview is intelligence. The context transforms the meaning entirely.
Inconsistency detection
Customers often express conflicting sentiments — “I would recommend this product” followed by “I am evaluating two alternatives.” Surveys treat these as independent data points. AI moderation detects the inconsistency and probes it, revealing that the recommendation is habitual while the competitive evaluation is active and deliberate.
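To show the shape of the check, here is a deliberately minimal sketch of inconsistency flagging. It uses naive keyword matching on hypothetical phrases, whereas actual AI moderation would rely on conversational follow-ups and language understanding; the function name and signal list are illustrative assumptions, not a real system's API.

```python
# Phrases that suggest an active competitive evaluation (illustrative list).
EVALUATION_SIGNALS = ("evaluating", "alternative", "switching to", "running a trial")

def flag_inconsistency(would_recommend: bool, transcript: str) -> bool:
    """Flag a response where a stated recommendation coexists with
    signals of active competitive evaluation -- the conflict a fixed-path
    survey would record as two independent, unremarkable data points."""
    text = transcript.lower()
    actively_evaluating = any(signal in text for signal in EVALUATION_SIGNALS)
    return would_recommend and actively_evaluating

print(flag_inconsistency(True, "I would recommend it, but we are evaluating two alternatives."))  # True
print(flag_inconsistency(True, "Happy with the product; no plans to change anything."))           # False
```

The point of the sketch is the trigger, not the matching: once the conflict is flagged, an adaptive interviewer can probe whether the recommendation is habitual or current.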
IC-ready evidence
Survey output: “NPS is 42. 87% satisfaction. 78% plan to renew.”
Interview output: “78% of 150 customers report strong renewal intent. The at-risk 22% concentrate in mid-market accounts where pricing sensitivity is high. Three customers in the top-10 by ARR are actively evaluating [specific competitor]. Churn risk is fixable through segment-specific pricing but structural without it. Model impact: adjust mid-market churn from 8% to 14%.”
The second version drives investment decisions. The first provides false comfort.
When Surveys Might Supplement Interviews
Surveys have a role in CDD as a supplement to interviews, not a replacement:
- Pre-interview screening: Use a short survey to identify which customers to interview in depth
- Quantitative validation: After interviews identify themes, a survey can quantify prevalence across a larger sample
- Longitudinal tracking: Simple pulse surveys between quarterly interview studies can detect rapid shifts
But for the primary CDD evidence that goes into IC memos, AI-moderated interviews are the appropriate methodology. The depth, adaptivity, and IC credibility of interview evidence are not achievable through survey instruments.
For the complete AI-moderated CDD methodology, see AI Commercial Due Diligence. For structuring interview evidence for IC presentations, see Presenting CDD Findings to Investment Committee.