
How to Run NPS/CSAT Follow-Ups with User Intuition

NPS and CSAT follow-up interviews on User Intuition are an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. A Customer Success leader, a CX manager, or a product marketing lead can launch a follow-up program in an afternoon, then watch transcripts, satisfaction-rated calls, and synthesized reports show up inside the Intelligence Hub within 24-48 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which makes a continuous program after every survey wave financially viable.

For the broader methodology (how NPS and CSAT differ, what to ask each score band), see the methodology guide. For headline pricing and the comparison against Medallia, Qualtrics XM, and Delighted, see the User Intuition NPS and CSAT research platform page. This piece stays in the workflow lane: what to click, where to click it, and why each setting matters when the cohort is detractors, passives, or promoters from your most recent survey wave.

What are NPS/CSAT follow-ups, briefly?


An NPS/CSAT follow-up is a structured qualitative interview run on customers who have already given you a satisfaction score, designed to surface the reasoning behind the number. NPS and CSAT measure sentiment with a single rating. The follow-up explains it. It sits between two failed substitutes: the open-ended comment box, which captures one-sentence fragments from 10 to 15% of respondents, and a CS team manually calling a handful of detractors, which is anecdotal and never reaches passives or promoters at scale.

Done well, a follow-up program answers four questions every CX team needs answered every wave. What specific event drove each detractor score? Why are passives stuck in the middle, and what would push them to a 9 or pull them to a 4? What do promoters love (not what we think they love)? How have these drivers shifted since last wave?

The mechanism is depth. A 10 to 20 minute conversation with 5 to 7 levels of structured probing surfaces the event and the emotion behind a score that a survey instrument will never reach. Onboarding friction, support degradation, a single account-management interaction, a pricing change, a competitor pitch the customer just heard: these are the patterns underneath an aggregate number, and they show up only in conversation.

AI-moderated platforms compress the timeline from weeks to 24-48 hours and the cost from a $25K-plus enterprise contract to $200 floor pricing. That changes the program design. Instead of one big annual driver study, CX teams can run a follow-up cohort after every NPS pulse, feed every transcript into a searchable Intelligence Hub, and watch satisfaction drivers compound wave by wave.

Why doing this on User Intuition is different


Most NPS/CSAT tooling makes you choose between measurement and explanation. Qualtrics, Medallia, Delighted, and AskNicely measure beautifully. None run structured 5-7 level probing on every detractor, passive, and promoter inside 48 hours. User Intuition is built for the qualitative layer.

The AI moderator runs 10-20 minute conversations and probes 5-7 levels deep on the event that drove the score, the emotional valence, and the change that would move it. Detractors get one discussion guide tuned for churn risk. Passives get a guide tuned for switching triggers. Promoters get a third tuned for advocacy barriers. Every conversation runs the same methodology, so a 10-respondent pilot and a 200-respondent quarterly study are directly comparable. Customers report higher candor with AI than with vendor-affiliated researchers, and 98% participant satisfaction reflects how the experience lands.

Recruitment matches how CX teams source respondents. CSV upload accepts an export from Qualtrics, Medallia, Delighted, or AskNicely. The CRM Contacts integration pulls Salesforce and HubSpot lists. The unique share link drops into a Zapier workflow the moment a survey closes. The 4M+ panel across 50+ languages handles lapsed customers who no longer respond to surveys. The most powerful pattern is the embed widget: drop it on your survey thank-you page and customers accept a 5-minute follow-up while the experience is still fresh.

Synthesis runs in the same product. The Reports tab generates an Executive Summary, Top Insights, Detailed Analysis grouped by score band, and Recommendations, all grounded in verbatim quotes. The Intelligence Hub layers across waves, so quarter-on-quarter patterns surface insights any single-wave report would miss. Pricing at $20/interview makes a continuous program sustainable. See the User Intuition NPS and CSAT solution page and the comparison of dedicated platforms for how this stacks up against survey tools.

The 8-step walkthrough


The product flow is the same for every study type. The choices below are tuned for NPS and CSAT follow-ups specifically.

Step 1: Create Study

Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. Pick the NPS and CSAT card. The selected card highlights with a dark background. Click Save & Continue.

The NPS and CSAT template pre-loads the conversation flow, screener defaults, and learning goals tuned for score-band follow-ups. The cleanest program design runs two studies in parallel: one detractor study and one promoter study, with passives bolted onto whichever side has bandwidth that wave. That parallel design lets you compare verbatim language across bands inside the Intelligence Hub without contaminating the moderator script.

NPS/CSAT-specific tip: if you only have time for one study this wave, run the detractor follow-up first. Detractor recovery has the highest revenue impact per interview, and the patterns you find will sharpen the promoter and passive scripts you write next wave. See Choose Your Study Type for the full template comparison.

Step 2: Customize Plan

The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.

For an NPS/CSAT follow-up, give Charles four things. First, the score type: NPS pulse or CSAT post-ticket. Second, the score band: detractors 0-6, passives 7-8, or promoters 9-10. Third, the segment: “Mid-market SaaS, North America, Pro plan, tenure 6-24 months.” Fourth, the business decision: “We are designing the Q3 retention plan and need to know what drove the drop from 42 to 36.” Charles returns a plan that asks about the precipitating event, the emotional reaction, the trigger, and the conditions that would move the score.

NPS/CSAT-specific tip: tell Charles what hypothesis you currently hold. He uses your hypotheses to design probes that test or break them, which keeps interviews from drifting into ground the survey already covered. Read the Customize Plan docs for the full set of context questions.

Step 3: Review Goals

The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 NPS Detractor Follow-Up, Mid-Market”; rename to fit your convention). Below that, every section is editable. Changes save automatically.

This is the step where most NPS programs go wrong. The default open-ended NPS question is “You scored a 4. Why?” That reaches the rationalization, not the event. Edit Key Sub-Questions to ask about the specific event: “What happened in the last week or month that drove your score?” Add a probe for emotional valence: “When you think about that moment, what did it feel like?” Add a probe for comparison: “Were you also evaluating any alternative at the time?” Add a probe for the change: “What would have to be true for you to give us a 9 next wave?” Those four edits separate a follow-up that surfaces drivers from one that just collects feature feedback.

NPS/CSAT-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Twelve total questions or fewer keeps completion above 70% on detractor cohorts, where dropout is highest. The Review Study docs cover the editing toolbar reference. Click Save & Continue when the plan reads the way you want.

Step 4: Setup Interview

The Setup Interview screen handles the participant-facing experience. Two voice cards: Elliot (Male) and Clara (Female). Click the play button on each card to preview the voice. Pick the one whose tone fits your customer audience; click the card to select it. Below the voice cards, the Default Mode toggle controls the interview format: Chat, Audio, or Video. Audio is the recommended default for NPS and CSAT follow-ups because voice captures the emotional intensity behind a score (the difference between a flat 6 and an angry 6) that chat will not.

Below the format toggles is the screener questions section. Add the highest-leverage NPS/CSAT screener: “What score did you give us in our most recent survey?” Anyone whose answer does not match the band you are studying drops out before consuming a credit. This is especially important when you import a mixed respondent list and want to keep the detractor study clean.

NPS/CSAT-specific tip: pick Elliot or Clara to fit the audience and the topic. Detractor interviews are higher-emotion conversations, and a calm, even voice (the default tone for both Elliot and Clara) helps respondents articulate frustration without escalating it. Read the Setup Interviewer docs for the full mode-selection guide.

Step 5: Test Conversation

The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, introduction, research questions, AI probing on your answers, and wrap-up. Test conversations do not count against your usage limits. A feedback screen asks “How was the conversation?” with Good or Needs Work options.

For NPS/CSAT, run two test conversations. Run the first as a detractor whose score was driven by a single bad onboarding experience, and listen for whether the AI probes the EVENT (what happened, when, who was involved) and the EMOTION (how it felt) rather than collecting feature complaints. Run the second as a promoter who scored a 9 because their CSM is responsive, and listen for whether the AI probes whether they have already referred you, what would make them refer more, and what advocacy barriers exist. NPS follow-ups go shallow when moderators stop at the first explanation.

NPS/CSAT-specific tip: if the AI accepts a surface answer, go back to Step 3 and tighten the Interviewer Guidelines: “Probe at least three levels deep when participants cite product quality. Ask about the specific moment, not the general impression.” Then re-test. The Test Conversation docs cover what to look for.

Step 6: Launch

The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (with the rocket icon). Your study goes live immediately. You are redirected to the Study Dashboard.

The detail CX teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step.

NPS/CSAT-specific tip: for continuous programs, set the launch cadence to follow each survey wave (weekly transactional CSAT, biweekly or monthly relationship NPS). Mid-week launches see the highest completion rates, so a Wednesday or Thursday morning launch in the respondent’s timezone gives you 2 to 3 strong response days before the weekend. The Launch docs cover everything that can and cannot be changed post-launch.

Step 7: Invite Participants

This step opens after launch. From the Study Dashboard, the Invites tab plus the Actions dropdown gives you four invitation methods that map directly to NPS/CSAT recruitment patterns.

Invite Your Customers lets you import contacts manually, via CSV, or directly from a synced CRM (Salesforce, HubSpot). This is how most teams recruit follow-up cohorts: export the score band from Qualtrics, Medallia, Delighted, or AskNicely, then upload the CSV. Research Panel opens recruitment from the 4M+ panel across 50+ languages, with country and language selection happening here. Use this for lapsed customers who no longer respond to surveys. Share Link generates a unique URL that drops into a Zapier workflow that fires the moment a survey closes, into a CSM email, or into a Stripe receipt. Embed Widget drops always-on follow-up collection into your survey thank-you page: the customer scores, then immediately accepts a 5-minute follow-up while the context is fresh. This is the highest-converting pattern for NPS and CSAT follow-ups by a wide margin.

NPS/CSAT-specific tip: combine methods. Embed widget on the survey thank-you page for fresh-context interviews, CSV upload from Qualtrics or Medallia for wave-by-wave back-fill, panel recruitment for lapsed customers. The Recruiting overview covers all four methods side by side.
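For the CSV back-fill pattern, it helps to filter the export to a single score band before uploading, so the detractor study stays clean even when the survey export is mixed. Here is a minimal, illustrative pre-upload filter; it is not a User Intuition API, and the column names `email` and `nps_score` are assumptions about your export header (Qualtrics, Medallia, Delighted, and AskNicely each name these fields differently), so adjust them to match your file.

```python
import csv
import io

def filter_score_band(csv_text, lo, hi, score_col="nps_score", email_col="email"):
    """Return rows whose score falls in [lo, hi], e.g. 0-6 for detractors.

    score_col and email_col are assumed column names; change them to
    match the header of your survey platform's export.
    """
    rows = csv.DictReader(io.StringIO(csv_text))
    band = []
    for row in rows:
        try:
            score = int(row[score_col])
        except (KeyError, ValueError):
            continue  # skip rows with a missing or non-numeric score
        if lo <= score <= hi:
            band.append({"email": row[email_col], "score": score})
    return band

# Example: keep only detractors (0-6) from a mixed export
export = "email,nps_score\na@x.com,3\nb@x.com,9\nc@x.com,6\nd@x.com,7\n"
detractors = filter_score_band(export, 0, 6)
# → [{'email': 'a@x.com', 'score': 3}, {'email': 'c@x.com', 'score': 6}]
```

Write the filtered rows back out as a two-column CSV and that file is ready for the Invite Your Customers upload.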

Step 8: Review Insights

Once interviews complete (first ones inside hours, full cohorts within 24-48 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row for the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis grouped by score band with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; Regenerate Report as more land.

The third surface is the Intelligence Hub, in the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files (PDFs, CSVs from Qualtrics or Medallia exports), and ask natural-language questions. For a continuous program, this is where the compounding intelligence lives. Detractor reasons, passive reasons, and promoter drivers are clustered automatically, so a CX leader can see WHY scores moved between waves, not just THAT they moved.

NPS/CSAT-specific tip: create a single Intelligence Hub session called “NPS Follow-Up” or “CSAT Follow-Up” at the start of the year, sync every wave’s study into it, and keep adding sources. Then ask questions like “How have detractor reasons shifted across the last four waves?” Pair it with the Reports module to ship a monthly NPS narrative to leadership. The Intelligence Hub docs and the Reports docs cover the full output surface.

Mini case study


A mid-market B2B SaaS company ran the workflow on 38 detractors after their Q2 NPS pulse, where relationship NPS had dropped from 42 to 36 wave over wave. The CX team set up the study in a single afternoon: NPS and CSAT template at Step 1, detractor band briefed to Charles at Step 2, goals tightened at Step 3 to ask about the specific event in the last 30 days, Audio mode and the score-confirmation screener at Step 4, two test conversations at Step 5, mid-week launch at Step 6.

Recruitment used a CSV export from Qualtrics filtered to 0-6 scores. The embed widget was also dropped on the survey thank-you page going forward. All 38 interviews completed within 48 hours.

The Reports tab and the Intelligence Hub surfaced a finding that contradicted the team’s working hypothesis. They had assumed pricing was driving detraction, since a price change landed two weeks before the wave. Pricing came up as a primary driver in only 6 of 38 conversations. The dominant theme, in 22 of 38 transcripts, was support response time degradation: the first-response SLA had drifted from 2 hours to 8-12 hours, and the price change had simply made the degradation feel less acceptable. The team rebuilt the support staffing model and committed publicly to a 4-hour first-response SLA. The next NPS pulse came in at 41, a recovery that pricing optimization alone would never have produced.

“Our NPS was stable at 38 for three quarters. The follow-up interviews revealed passives were satisfied but not loyal, one competitor pitch away from churning. That insight drove our entire Q3 retention strategy.”

Eric O., COO, RudderStack

What gets the best results from NPS/CSAT follow-ups on User Intuition?


Tips for getting the most out of the workflow, gathered from patterns that consistently produce score-band movement.

Probe the event, not the rationalization. Most NPS programs ask “Why did you score that?” and stop at the first answer. The highest-leverage edit at Step 3 is forcing the moderator to anchor on a specific event in the last 30 days, then probe the emotion attached. That single change separates a follow-up that produces drivers from one that produces feature requests.

Run two test conversations every time. One as a detractor whose score was driven by a single bad event, one as a promoter who has never referred you. The highest-leverage 20 minutes you can spend before launch is catching shallow probing or tone misalignment while it is cheap to fix.

Pick voice over chat. Voice transcripts surface the emotional intensity behind a score that chat compresses out. The difference between a flat 6 and an angry 6 is the entire signal. Reserve chat for international cohorts where typing is the better experience.

Run the embed widget on your survey thank-you page. Fresh-context follow-ups within 5 minutes of scoring produce richer transcripts than back-fill interviews scheduled days later. Recall fidelity is significantly higher.

Run continuous, not annual. 30-50 follow-up interviews per wave at $20/interview compounds into a much sharper picture than one 200-respondent annual driver study. See the comparison of dedicated NPS and CSAT platforms and the AI NPS follow-up interview deep dive for the budgeting case behind the per-wave cadence.
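To make the budgeting case concrete, here is the back-of-envelope arithmetic behind the per-wave cadence as a short sketch. The wave count and cohort size are illustrative assumptions, not recommendations; only the $20/interview rate comes from the pricing cited in this guide.

```python
PRICE_PER_INTERVIEW = 20  # Pro plan audio rate cited in this guide

# Continuous program: an assumed 40 follow-ups after each of 12 monthly pulses
continuous_interviews = 12 * 40  # 480 interviews across the year
continuous_annual = continuous_interviews * PRICE_PER_INTERVIEW

# Episodic alternative: one 200-respondent annual driver study
episodic_annual = 200 * PRICE_PER_INTERVIEW

print(continuous_annual)  # 9600
print(episodic_annual)    # 4000
```

Even a full continuous year at that assumed volume (480 interviews, $9,600) stays well below the $25K floor of the enterprise contracts mentioned earlier, while sampling every wave instead of one annual snapshot.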

Sync the Intelligence Hub once per wave. Cross-wave pattern recognition is the highest-leverage output, and it only works if every completed study lands in the same NPS or CSAT session on a regular cadence. This is the same compounding-intelligence logic that powers continuous churn analysis programs.

Route insights inside 7 days. Score-band findings degrade fast. As soon as the report and the Intelligence Hub are populated, route findings to specific owners (Customer Success for detractors, Product for passives, Marketing for promoter advocacy) with a defined SLA. The programs that move NPS or CSAT are the ones whose findings reach a decision-maker the same week.


Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

How long does setup take?

Setup takes 10 to 20 minutes from Create Study to Launch if you have your respondent list and a clear score band in mind. Charles, the AI researcher inside User Intuition, drafts the objective, background, and learning goals from a short brief. Most CX teams spend more time exporting the respondent list from Qualtrics or Medallia than configuring the study itself.

Do I need a dedicated researcher to run this?

No. A Customer Success leader, CX manager, or product marketer can run a full follow-up program without a dedicated researcher. The AI runs the moderation, the synthesis, and the report. Your job is to brief Charles on the score band and segment, edit the learning goals, and review the Intelligence Hub once interviews complete.

Can I use my own survey respondent list?

Yes. The Invite Participants step accepts CSV uploads, manual email entry, CRM contact imports from Salesforce or HubSpot, and unique share links you can paste into a Zapier flow from Qualtrics or Medallia. The embed widget on the survey thank-you page is the fastest pattern: customers score, then accept a 5-minute follow-up while context is fresh.

Should interviews run over voice or chat?

Voice for almost every NPS/CSAT follow-up. Voice transcripts capture the emotional intensity behind a score (the difference between a flat 6 and an angry 6) that chat compresses out. Chat is the right call only for international respondents in non-primary languages or for accessibility-sensitive participants. Audio is the recommended default in User Intuition's setup screen.

How many follow-up interviews do I need?

Thirty to fifty per score band (detractors, passives, promoters) is where segment-level driver analysis becomes defensible. Fifty to one hundred total respondents across all bands typically surfaces the major satisfaction drivers. The 4M+ research panel is rarely needed here because your survey already produced the cohort, but it covers cases where you want to interview lapsed customers who no longer respond to surveys.

How do screener questions work for follow-ups?

Screener questions sit inside the Setup Interview step and run before the main interview begins. For follow-ups, the highest-leverage screener is score confirmation: 'What score did you give us in our most recent NPS or CSAT survey?' Anyone who cannot confirm the score drops out before consuming a credit, which keeps the cohort tight when you import a mixed list.

Can I run follow-ups in multiple languages?

Yes. User Intuition supports 50+ languages, and the country and language selection happens at the Invite Participants step when you build a research panel. For a multi-region NPS or CSAT program, you can run parallel English, Spanish, French, German, and Japanese cohorts inside a single study and compare verbatim drivers side by side in the Intelligence Hub.

What does a follow-up program cost?

On the Professional plan, audio interviews run at $20/interview, the marketing headline rate. Studies start at $200 (20 interviews), with no monthly minimums and no contract. A 100-interview program across detractors, passives, and promoters runs roughly $2,000, well below the $25K to $100K you would pay an enterprise survey platform for the same qualitative layer.

Does this replace my survey platform?

Survey platforms collect open-ended comments. They do not run 5-7 level structured probing on the reasoning behind a score. The User Intuition workflow is a complement, not a replacement: keep Qualtrics or Medallia for measurement, send the respondent list into User Intuition, and get a 24-48 hour qualitative layer that explains every score band. Most CX teams keep both.

Where do the results live?

Three places inside User Intuition. The Calls tab holds every transcript and audio recording with quality ratings. The Reports tab holds the AI-synthesized Executive Summary, Top Insights, Detailed Analysis, and Recommendations grouped by score band. The Intelligence Hub aggregates studies over time so a CX leader can ask cross-wave questions like 'How have detractor reasons shifted across the last four NPS pulses?'
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve

3 interviews free. No credit card required.

See it First

Explore a real study output — no sales call needed.

No contract · No retainers · Results in 72 hours