
NPS Driver Analysis: What Qualitative Interviews Reveal

By Kevin, Founder & CEO

You’ve run the regression. Product quality, support satisfaction, and ease of use are your top NPS drivers. The R-squared looks solid. The executive summary is ready.

But here’s the question the statistics can’t answer: why?

Why does product quality drive NPS for some segments but not others? What specifically about support satisfaction matters — speed, accuracy, empathy, or proactivity? And what about the drivers that aren’t in your model at all because you never thought to ask about them?

Statistical NPS driver analysis is valuable. It identifies patterns across your full respondent base with mathematical precision. But it has a fundamental limitation: it can only find correlations among variables you already measured. It cannot discover what it wasn’t designed to look for.

Qualitative follow-up interviews fill this gap. They ask customers directly what drives their score, surfacing causal relationships and unknown unknowns that no statistical model can detect.

This guide covers how to build a driver analysis program that combines both approaches — quantitative patterns with qualitative depth.

What Is NPS Driver Analysis?


NPS driver analysis answers the question: which factors most influence whether a customer gives you a high or low NPS score?

If your NPS is 35, driver analysis tells you what’s pushing it up and what’s dragging it down. It identifies the specific levers — product performance, support quality, pricing fairness, onboarding experience, relationship strength — that have the greatest impact on how customers rate you.

Without driver analysis, improving NPS is guesswork. You invest in product features because engineering is excited about them, improve support because ticket volume seems high, or adjust pricing because a competitor moved. Driver analysis replaces assumptions with evidence, directing resources toward the changes that will actually move your score.

The challenge is that most organizations stop at the statistical layer — and that layer, while powerful, is incomplete.

The Two Approaches to NPS Driver Analysis


Quantitative: Statistical Pattern Detection

The standard approach uses statistical methods to identify which survey attributes correlate most strongly with NPS scores. Common techniques include:

Key Driver Analysis (KDA): Combines correlation strength with attribute performance to identify priority areas. Attributes that are highly correlated with NPS but have low satisfaction scores are your top priorities.

Multiple Regression Analysis: Models NPS as a function of multiple satisfaction attributes simultaneously, identifying the independent contribution of each driver while controlling for others.

Structural Equation Modeling (SEM): Maps complex relationships between drivers, including mediating and moderating effects. Shows how drivers influence each other, not just how they influence NPS.

Relative Importance Analysis: Decomposes the total explained variance in NPS across all drivers, accounting for multicollinearity between predictors.

These methods are rigorous, scalable, and produce defensible results. They’re excellent at answering the question: “Among the things we measured, which matter most?”
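To make the KDA quadrant idea concrete, here is a minimal sketch in plain Python: it correlates each survey attribute with the NPS question (importance) and crosses that against the attribute's mean rating (performance). The attribute names, ratings, and cutoff thresholds are illustrative assumptions, not a production method.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Illustrative survey data: attributes rated 1-5, NPS question 0-10.
respondents = {
    "nps":        [9, 3, 7, 10, 5, 8, 2, 9],
    "product":    [5, 2, 4, 5, 3, 4, 1, 5],
    "support":    [4, 3, 3, 5, 2, 4, 2, 4],
    "pricing":    [3, 3, 4, 4, 3, 3, 4, 3],
    "onboarding": [4, 2, 3, 5, 3, 4, 2, 5],
}

def key_driver_quadrants(data, nps_key="nps", corr_cut=0.5, perf_cut=3.5):
    """Classify each attribute by correlation with NPS (importance)
    and mean rating (performance). High-importance, low-performance
    attributes are the priority fixes."""
    out = {}
    for attr, ratings in data.items():
        if attr == nps_key:
            continue
        importance = pearson(ratings, data[nps_key])
        performance = mean(ratings)
        if importance >= corr_cut and performance < perf_cut:
            quadrant = "priority fix"
        elif importance >= corr_cut:
            quadrant = "maintain strength"
        elif performance < perf_cut:
            quadrant = "low priority"
        else:
            quadrant = "monitor"
        out[attr] = (round(importance, 2), round(performance, 2), quadrant)
    return out

for attr, (imp, perf, quad) in key_driver_quadrants(respondents).items():
    print(f"{attr:>10}: r={imp:+.2f}, mean={perf:.2f} -> {quad}")
```

With this toy data, support lands in the priority-fix quadrant (strong correlation, weak satisfaction) while pricing barely correlates at all — exactly the kind of separation KDA is designed to surface.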

Qualitative: Direct Causal Exploration

The qualitative approach uses follow-up interviews to ask customers directly what drives their score. Rather than inferring drivers from survey correlations, interviews let customers explain their own reasoning.

An AI-moderated interview might unfold like this:

AI: You gave us a 6. Can you walk me through what was on your mind when you chose that score?

Customer: Honestly, the product is good. But we’ve had three implementation delays in the last six months, and each one set our internal timeline back by weeks. My team has lost confidence that your timelines are reliable.

AI: So the implementation delays are the primary factor in your score?

Customer: It’s not just the delays. It’s that nobody proactively told us about them. We found out when we checked in and asked for status updates. That’s what really frustrated us — the lack of communication.

In two question-and-answer exchanges, you’ve learned something that no statistical model could surface: the root driver isn’t implementation speed (which you might measure in a survey) but proactive communication about delays (which you almost certainly don’t measure).

Why Statistical Analysis Alone Is Insufficient


Statistical NPS driver analysis is a necessary foundation. But it has four structural limitations that qualitative interviews address.

Limitation 1: Correlation Does Not Equal Causation

This is the most fundamental gap. Your regression shows that customers who rate “support satisfaction” highly also give higher NPS scores. But does great support cause high NPS? Or do already-satisfied customers rate everything higher?

The correlation could run in multiple directions:

  • Great support actually drives loyalty (causal)
  • Happy customers perceive support more favorably regardless of quality (reverse causation)
  • A third factor (strong onboarding) produces both high support satisfaction and high NPS (confounding)

Statistical methods alone cannot distinguish between these explanations. Interviews can, because customers explain their own causal reasoning.
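A small simulation makes the confounding case tangible. In the invented generating model below, onboarding quality drives both support satisfaction and NPS, while support has no direct effect at all — yet the raw correlation makes support look like a strong driver. Removing the confounder's contribution collapses the relationship.

```python
import random
from statistics import mean, stdev

random.seed(7)

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Confounder: onboarding quality drives BOTH support satisfaction and NPS.
# Support satisfaction has NO direct effect on NPS in this model.
n = 1000
onboarding = [random.gauss(0, 1) for _ in range(n)]
support = [o + random.gauss(0, 1) for o in onboarding]
nps = [o + random.gauss(0, 1) for o in onboarding]

r_raw = pearson(support, nps)

# "Control" for the confounder by correlating the residuals instead.
support_resid = [s - o for s, o in zip(support, onboarding)]
nps_resid = [v - o for v, o in zip(nps, onboarding)]
r_partial = pearson(support_resid, nps_resid)

print(f"raw support-vs-NPS correlation: {r_raw:.2f}")
print(f"controlling for onboarding:     {r_partial:.2f}")
```

The raw correlation comes out clearly positive while the controlled one sits near zero — and nothing in the raw survey data alone tells you which world you are in. Interviews do.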

Limitation 2: You Can Only Find What You Measured

Statistical driver analysis operates exclusively within the variables included in your survey. If your survey measures product satisfaction, support satisfaction, pricing satisfaction, and onboarding satisfaction, your driver analysis will rank those four variables.

But what if the real driver is something you never asked about?

In practice, the most impactful drivers frequently fall outside standard survey frameworks:

  • Relationship quality: How well the customer feels known and understood by their account team
  • Strategic alignment: Whether the customer believes your product roadmap matches their future needs
  • Internal advocacy burden: How hard the customer has to work to justify your product internally
  • Competitive awareness: Whether the customer is seeing compelling alternatives in the market
  • Trust in leadership: Whether the customer believes in your company’s long-term viability

These drivers rarely appear in satisfaction surveys, but they frequently emerge in follow-up interviews as the primary factors behind a customer’s score.

Limitation 3: Aggregated Patterns Miss Segment-Level Nuance

A regression model trained on all respondents produces aggregate driver weights. But drivers vary significantly by segment:

Enterprise customers may be driven primarily by dedicated support, integration flexibility, and security compliance — factors that barely register for SMBs.

New customers weight onboarding experience and time-to-value heavily. Tenured customers care more about product roadmap and long-term strategic fit.

Different industries have different driver profiles. A healthcare customer’s NPS is influenced by HIPAA compliance in a way that a media company’s never will be.

You can run segment-level regressions, but you need sufficient sample sizes for each segment, and you still face the correlation and unknown-unknowns problems within each segment.

Interviews naturally capture segment-level nuance because each conversation reflects the individual customer’s specific context and priorities.

Limitation 4: Emotional and Relational Drivers Are Invisible to Surveys

Some of the most powerful NPS drivers are emotional and relational, and they resist quantification on a survey scale:

  • The feeling that your vendor truly understands your business
  • Trust that your account team will go to bat for you internally
  • Confidence that problems will be handled proactively, not reactively
  • The sense of being a valued customer rather than a revenue line item

These drivers are real, they influence scores, and they emerge clearly in conversational interviews. They don’t translate well to 1-5 satisfaction scales.

How Qualitative Interviews Complement Statistical Analysis


The strongest driver analysis programs don’t choose between quantitative and qualitative — they layer them. Each approach addresses the other’s blind spots.

Interviews Explain the WHY Behind Statistical Correlations

Your regression shows that “product reliability” is a top driver. But what does “reliability” mean to customers? Is it uptime? Data accuracy? Consistent behavior across updates? Performance under load?

Interviews decompose statistical drivers into their constituent parts, revealing what specifically needs to improve. “Improve product reliability” is a vague directive. “Reduce data sync latency that causes stale dashboard reports during peak hours” is an engineering ticket.

Interviews Discover Drivers You Never Measured

This is perhaps the most valuable contribution of qualitative analysis. When you ask customers to explain their score in an open-ended conversation, they tell you about factors you never thought to include in your survey.

Across thousands of NPS follow-up interviews, the most common unmeasured drivers that surface include:

  • Vendor fatigue: Customers managing too many tools who aren’t dissatisfied with any single one but are overwhelmed by the total stack
  • Champion dependency: The customer’s experience depends entirely on one knowledgeable internal user, creating fragility
  • Implementation debt: Problems introduced during initial setup that were never properly resolved and compound over time
  • Perception of innovation: Whether the customer believes the vendor is advancing or coasting
  • Peer influence: What customers hear about you from colleagues, analysts, and their professional networks

None of these appear in a standard satisfaction survey. All of them significantly influence NPS scores. Interviews are the only systematic way to discover them.

Interviews Reveal How Drivers Interact and Compound

Statistical models typically treat driver effects as additive unless you explicitly specify interaction terms for particular combinations. In reality, drivers interact in complex ways that customers can articulate but models struggle to capture.

A customer might explain: “Your product is excellent, and your support is fast. But when I have a complex issue that requires both product knowledge and creative problem-solving, neither your documentation nor your front-line support can help. I need to escalate to engineering, and that takes two weeks. It’s the intersection of product complexity and support depth that drives my score.”

This interaction between product sophistication and support expertise wouldn’t appear as a standard survey attribute, and the statistical interaction would be difficult to identify without prior hypotheses.

The Driver Analysis Framework: Score, Interview, Cluster, Prioritize, Track


Here’s a practical framework for integrating quantitative and qualitative driver analysis into a continuous program.

Step 1: Score (Quantitative Foundation)

Run your NPS survey and conduct standard statistical driver analysis on the results. This gives you a baseline: here are the measured attributes, here’s how strongly each correlates with NPS, here are your priority quadrants.

Treat these results as hypotheses, not conclusions. The statistics tell you where to look; the interviews tell you what you’re looking at.
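As a quick refresher before layering anything on top, the baseline NPS number itself is simple arithmetic — the percentage of promoters (9-10) minus the percentage of detractors (0-6):

```python
def nps_score(ratings):
    """Compute Net Promoter Score from 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# 4 promoters, 3 passives, 3 detractors out of 10 responses.
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps_score(ratings))  # → 10
```

Everything that follows — the driver correlations, the priority quadrants — is built on top of this single number and the attribute ratings collected alongside it.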

Step 2: Interview (Qualitative Depth)

Within 48-72 hours of your survey closing, conduct AI-moderated follow-up interviews with respondents across all score bands. Interview design should:

  • Test statistical hypotheses: “Our data shows support is a top driver. Tell me about your support experiences and how they influence your overall perception.”
  • Explore unmeasured territory: “Beyond the topics we covered in the survey, what else influences how you feel about working with us?”
  • Probe causal mechanisms: “You mentioned product reliability. Walk me through a specific situation where reliability affected your experience.”
  • Capture segment context: “As an [enterprise/SMB] customer in [industry], what matters most to you that might be different from other customers?”

The AI moderator follows the customer’s narrative, probing deeper on each thread rather than forcing a rigid question sequence. This conversational approach surfaces drivers that structured surveys miss.

Step 3: Cluster (Pattern Identification)

Analyze interview transcripts to identify recurring themes and cluster them into driver categories. This typically produces:

  • Confirmed drivers: Factors that appeared in both statistical and qualitative analysis (your most robust findings)
  • Qualified drivers: Statistical drivers whose causal mechanism is now understood (you know WHY they matter)
  • Discovered drivers: New factors surfaced by interviews that weren’t in your survey (your biggest opportunities)
  • Qualified non-drivers: Survey attributes that statistically correlate but aren’t actually causal according to customers (avoid investing here)

Step 4: Prioritize (Strategic Allocation)

Rank drivers by a combination of:

  • Impact: How strongly does this driver influence NPS based on both statistical and qualitative evidence?
  • Prevalence: What percentage of customers mention this driver in interviews?
  • Segment concentration: Does this driver matter more for high-value segments?
  • Feasibility: How addressable is this driver within your current resources and roadmap?
  • Trajectory: Is this driver becoming more or less important over time?

The output is a prioritized investment roadmap: here’s where to put resources to have the greatest impact on NPS and retention.
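One simple way to operationalize this ranking is a weighted score across the five criteria. The weights, the 1-5 ratings, and the driver names below are placeholder assumptions to adapt to your own program.

```python
# Illustrative weighted scoring for driver prioritization.
# Weights and the 1-5 rating scales are assumptions, not a standard.
WEIGHTS = {
    "impact": 0.35,
    "prevalence": 0.25,
    "segment_concentration": 0.15,
    "feasibility": 0.15,
    "trajectory": 0.10,
}

def priority_score(driver):
    """Weighted sum of 1-5 ratings across the five criteria."""
    return round(sum(WEIGHTS[k] * driver[k] for k in WEIGHTS), 2)

drivers = {
    "proactive delay communication": {
        "impact": 5, "prevalence": 4, "segment_concentration": 4,
        "feasibility": 5, "trajectory": 4,
    },
    "dashboard data latency": {
        "impact": 4, "prevalence": 3, "segment_concentration": 3,
        "feasibility": 2, "trajectory": 3,
    },
}

ranked = sorted(drivers, key=lambda d: priority_score(drivers[d]), reverse=True)
for name in ranked:
    print(f"{priority_score(drivers[name]):.2f}  {name}")
```

The exact weights matter less than making them explicit: a written-down scoring model forces the team to debate trade-offs (impact vs. feasibility) before committing roadmap resources.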

Step 5: Track (Longitudinal Measurement)

Repeat the full framework quarterly. This builds a longitudinal view of how drivers evolve:

  • Are your product investments actually shifting the drivers that matter?
  • Are new drivers emerging as competitive or market conditions change?
  • Are segment-level driver profiles converging or diverging?
  • Are confirmed drivers strengthening or weakening over time?

Tracking driver changes is more valuable than tracking NPS changes because it tells you WHY your score is moving, not just that it moved.

Segment-Level Driver Analysis


One of the most actionable outputs of qualitative driver analysis is segment-level driver mapping. Here’s what this reveals in practice:

Enterprise vs. SMB

Enterprise customers typically rank integration depth, security compliance, dedicated support, and executive sponsorship as top drivers. Their NPS is influenced by whether your product plays well in a complex, multi-vendor technology stack and whether they have a named human who understands their account.

SMB customers typically rank ease of use, time-to-value, pricing transparency, and self-service capability as top drivers. Their NPS is influenced by whether they can succeed with your product without specialized expertise or expensive implementation services.

Running a single aggregate driver model across both segments produces misleading priorities. Qualitative interviews naturally capture these differences because each customer explains their own context.
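A small numeric illustration of why the aggregate model misleads: with invented, mirrored ratings for the two segments, the "top driver" by within-segment correlation flips entirely between enterprise and SMB.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Illustrative 1-5 attribute ratings and 0-10 NPS by segment.
# The two segments are deliberately mirrored: enterprise NPS tracks
# dedicated support, SMB NPS tracks ease of use.
segments = {
    "enterprise": {"support": [5, 4, 2, 5, 3, 1],
                   "ease":    [3, 4, 3, 2, 4, 3],
                   "nps":     [10, 8, 3, 9, 6, 2]},
    "smb":        {"support": [3, 4, 3, 2, 4, 3],
                   "ease":    [5, 4, 2, 5, 3, 1],
                   "nps":     [10, 8, 3, 9, 6, 2]},
}

def top_driver(seg):
    """Rank attributes by correlation with NPS within one segment."""
    corrs = {a: pearson(v, seg["nps"]) for a, v in seg.items() if a != "nps"}
    return max(corrs, key=corrs.get)

for name, seg in segments.items():
    print(f"{name}: top driver = {top_driver(seg)}")
```

Pool these two segments into one regression and each segment's dominant driver dilutes the other's — the aggregate weights describe an average customer who doesn't exist.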

New vs. Tenured Customers

New customers (first 6-12 months) are heavily influenced by onboarding quality, first-impression experiences, and whether they’ve achieved initial time-to-value. Their NPS drivers are concentrated around “did this product deliver what was promised during the sales process?”

Tenured customers (2+ years) are influenced by product evolution, roadmap alignment, and long-term relationship quality. Their NPS drivers shift toward “is this product keeping pace with my growing needs, and does this vendor still invest in me?”

By Geography or Market

Customers in different markets may have different driver profiles based on competitive landscapes, cultural expectations, and regulatory environments. Qualitative interviews capture these nuances where surveys with standardized attributes cannot.

Getting Started With Qualitative NPS Driver Analysis


Statistical driver analysis shows you correlations. Qualitative interviews reveal causation. The combination gives you a driver analysis program that’s both rigorous and actionable.

User Intuition’s NPS and CSAT solution conducts AI-moderated follow-up interviews with your NPS respondents across all score bands within 48-72 hours of your survey closing. At $20 per interview, a comprehensive driver analysis study of 100 respondents costs $2,000 and produces:

  • Causal driver identification with customer-verbatim evidence
  • Segment-level driver mapping (by customer type, tenure, industry)
  • Discovered drivers that weren’t in your survey
  • Prioritized improvement recommendations

Your statistical model tells you what correlates. Your customers will tell you what causes it.

Launch your qualitative driver analysis.

Frequently Asked Questions


What is NPS driver analysis?

NPS driver analysis identifies which factors — product quality, support responsiveness, pricing, onboarding experience, etc. — most influence whether customers give high or low NPS scores. The goal is to understand what drives satisfaction so you can prioritize improvements that will actually move your NPS.

What is the difference between quantitative and qualitative driver analysis?

Quantitative driver analysis uses statistical methods (regression, correlation, structural equation modeling) to identify which survey attributes correlate most strongly with NPS scores. Qualitative driver analysis uses follow-up interviews to ask customers directly what drives their score. Quantitative shows correlations; qualitative reveals causation.

Why isn’t statistical driver analysis enough on its own?

Statistical analysis can only find correlations among variables you measured. It can’t discover drivers you didn’t include in your survey (unknown unknowns), can’t distinguish correlation from causation, and aggregates patterns that may not apply to specific segments. A support satisfaction correlation doesn’t tell you whether slow responses or unhelpful answers drive the relationship.

How many follow-up interviews do I need?

For company-wide driver identification, 50-100 interviews across all score bands provide sufficient thematic saturation. For segment-level driver analysis (enterprise vs SMB, by industry, by tenure), aim for 20-30 per segment. At $20 per AI-moderated interview, a comprehensive study costs $1,000-$2,000.

What questions should I ask in follow-up interviews?

Start with “Walk me through why you chose that score.” Then probe with driver-specific questions: “What’s the single biggest factor in your rating?” “What would change your score?” “How do different aspects of our service — product, support, pricing, onboarding — weigh in your overall impression?” The AI moderator follows the customer’s narrative rather than forcing a predefined list.

How do I combine statistical and qualitative driver analysis?

Run your statistical analysis first to identify correlations and hypotheses. Then design follow-up interviews that test those hypotheses and probe for undiscovered drivers. The interviews explain WHY the statistical correlations exist and surface drivers your survey never measured. Feed qualitative findings back into your next survey design.

Do NPS drivers differ by customer segment?

Absolutely, and this is one of the biggest limitations of aggregate statistical analysis. Enterprise customers may be driven primarily by integration depth and dedicated support, while SMBs care most about ease of use and pricing. New customers weight onboarding heavily; tenured customers weight product roadmap. Qualitative interviews naturally capture these segment-level differences.

How often should I run driver analysis?

Run a full driver analysis quarterly, aligned with your NPS survey cadence. Drivers shift as your product evolves, competitive landscape changes, and customer expectations adjust. Quarterly analysis lets you track whether product investments are actually changing the drivers that matter most.

What’s the ROI of qualitative driver analysis?

The ROI comes from better resource allocation. Without driver analysis, product and CX investments are based on assumptions. With driver analysis, you invest in the changes that will have the highest impact on scores and retention. Teams using qualitative driver analysis report more efficient improvement programs because they fix root causes rather than symptoms.

How does User Intuition support NPS driver analysis?

User Intuition conducts AI-moderated follow-up interviews with NPS respondents across all score bands within 48-72 hours of your survey closing. The platform delivers structured driver analysis including causal driver identification, segment-level driver mapping, and quarter-over-quarter driver tracking through the Intelligence Hub.
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve

3 interviews free. No credit card required.

Enterprise

See a real study built live in 30 minutes.

No contract · No retainers · Results in 72 hours