Churn analysis on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. The product is built so a VP of Customer Success, a Director of Retention, or a RevOps lead can launch a churned-customer interview program in an afternoon, then watch transcripts, satisfaction-rated calls, and synthesized reports show up inside the Intelligence Hub within 48-72 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which means a continuous monthly program is financially viable even at $20M to $80M ARR. This tutorial walks through each of the eight steps with the verbatim labels you will see in the product, the choices that matter most for churn analysis specifically, and the links to the underlying documentation.
For the broader methodology (how to design a program, what questions to ask, how to route insights), see the methodology guide. For the headline pricing, the proof points, and the comparison against Gainsight, ChurnZero, and exit surveys, see the User Intuition churn analysis software page. This piece stays in the workflow lane: what to click, where to click it, and why each setting matters when the cohort you care about is customers who have just cancelled.
What is churn analysis, briefly?
Churn analysis is the systematic practice of interviewing customers who have cancelled to understand the real reasons behind their decision, not the checkbox reason they selected on the way out. It sits between two failed substitutes: exit surveys, which draw a 5-15% response rate and return checkbox answers like “price” or “missing features,” and CSM-led exit calls, which top out at 10-20 conversations per quarter and carry the bias of the rep who owned the account. Neither produces continuous intelligence.
Done well, churn analysis answers four questions every quarter. What triggered the evaluation to leave? What alternative did the customer move to, or did they go back to manual? Where in the lifecycle did trust break (week two, month three, month six)? And which drivers are fixable by CS, which by product, and which are structural?
The mechanism is depth. A 30-minute conversation with 5 to 7 levels of structured laddering surfaces decision drivers that a 2-minute exit survey will never reach. Onboarding abandonment, value erosion at the renewal moment, champion turnover, integration gaps, the specific support ticket that broke confidence: these are the patterns underneath the price excuse. They show up only in conversation.
Modern AI-moderated platforms compress the timeline from weeks to 48-72 hours and the cost from $45K+ per study to a $200 floor. Mid-market retention teams can run 15 to 25 churn interviews every month, feed every transcript into a searchable Intelligence Hub, and watch retention intelligence compound cohort by cohort. The workflow below is built around that cadence.
Why doing this on User Intuition is different
Most churn tooling makes you choose between coverage and depth. Exit surveys are cheap and shallow. Research firms are deep and slow. CS workflow tools like Gainsight and ChurnZero tell you which customers are at risk based on usage signals but cannot tell you why. User Intuition delivers depth and coverage in the same product.
The AI moderator runs 30-minute structured laddering conversations and probes 5 to 7 levels deep on cancellation triggers, alternative evaluation, and the moments where trust started to erode. Every conversation runs the same methodology, so a study with 15 interviews and a study with 200 interviews are directly comparable. There is no CSM bias, because there is no CSM in the room defending their account work. Churned customers report higher candor with AI than with vendor-affiliated researchers, and the 98% participant satisfaction rate reflects how the experience lands.
Recruitment is built in. The 4M+ research panel covers 50+ languages. Cancelled customers flow in directly from Salesforce or HubSpot via the Contacts integration, or from Stripe via the auto-trigger on cancellation, downgrade, or failed payment. Always-on collection is also a button: drop the embed widget into your cancellation flow, and feedback flows continuously while the trigger event is still fresh.
Synthesis runs in the same product. The Reports tab generates an Executive Summary, Top Insights, Detailed Analysis, and Recommendations, all grounded in verbatim customer quotes. The Intelligence Hub layers across cohorts, so month-one and month-six drivers cluster into compounding insights that any single-cohort report would miss. Pricing at $20/interview makes the stack sustainable as a continuous program at mid-market scale. See the churn analysis solution page for the full positioning.
The 8-step walkthrough
The product flow is the same for every study type. The choices below are tuned for churn analysis specifically.
Step 1: Create Study
Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. Pick the Churn Analysis card. The selected card highlights with a dark background. Click Save & Continue.
The Churn Analysis template pre-loads the conversation flow, screener defaults, and learning goals tuned for cancellation interviews. Starting from the right shape saves 10 minutes versus Custom Design. This template is the one you want for post-cancellation deep-dives, at-risk account research, competitive defection studies, and renewal-window hesitation interviews.
Churn-specific tip: frame the study around “Why don’t people come back?” rather than “Why did they cancel?” The first version probes the alternative the customer is using now, which surfaces the gap that drove them out. See Choose Your Study Type for the full template comparison.
Step 2: Customize Plan
The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.
For churn, give Charles three things in your opening message. First, the time window: “We are studying customers who cancelled in the last 60 to 90 days.” Second, the segments: “Mid-market B2B SaaS, mix of voluntary and involuntary churn, split by Pro and Business tier.” Third, the use case framing: “We want to know which drivers are recoverable so we can re-engage at six months, and which are preventable so we can fix them next cohort.” Charles will return a plan that asks about the trigger event, relationship timeline, alternative evaluated, and the moment trust started to drop.
Churn-specific tip: tell Charles whether each cohort is voluntary or involuntary churn. Voluntary interviews ladder into emotional drivers and competitive evaluation; involuntary interviews ladder into payment friction, ownership change, and procurement dynamics. Read the Customize Plan docs for the full set of context questions.
Step 3: Review Goals
The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 Churn Analysis, Mid-Market SaaS”; rename it to fit your internal naming convention). Below that, every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.
This is the step where most teams add two specific coverage areas. First, the trigger event: add a sub-question like “Walk me through the first moment you started thinking about leaving, and what was happening in your business at that time.” Second, the alternative: ask what they are using now or whether they went back to manual. The alternative is the clearest signal of what your product was failing to deliver.
Churn-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Twelve total questions or fewer keeps completion above 70% on cancellation cohorts, which are already lower-completion than active users. The Review Study docs have the full editing toolbar reference. Click Save & Continue when the plan reads the way you want.
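The sizing guidance is mechanical enough to check before you paste the plan into the editor. This is a hypothetical helper, not product functionality; it assumes you have the plan as a theme-to-questions mapping.

```python
# Hypothetical helper (not part of User Intuition): sanity-check a research
# plan outline against the sizing guidance above (4-6 themes, 2-4 questions
# each, 12 total or fewer) before pasting it into the Review Goals editor.

def check_plan_size(plan: dict[str, list[str]]) -> list[str]:
    """plan maps theme name -> list of sub-questions; returns warnings."""
    warnings = []
    if not 4 <= len(plan) <= 6:
        warnings.append(f"{len(plan)} themes; aim for 4-6")
    for theme, questions in plan.items():
        if not 2 <= len(questions) <= 4:
            warnings.append(f"'{theme}' has {len(questions)} questions; aim for 2-4")
    total = sum(len(qs) for qs in plan.values())
    if total > 12:
        warnings.append(f"{total} total questions; 12 or fewer keeps completion above 70%")
    return warnings
```

An empty return list means the plan fits the guidance; anything else is a theme to merge or a question to cut.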
Step 4: Setup Interview
The Setup Interview screen handles the participant-facing experience. Two voice cards: Elliot (Male, American) and Clara (Female, American). Click the play button on each card to preview the voice. Pick the one whose tone fits your customer audience; click the card to select it. Below the voice cards, the Default Mode toggles control the interview format: Chat, Audio, or Video. Audio is the recommended default for churn analysis: voice transcripts capture the emotional emphasis, the hesitation, and the trust-erosion language that chat compresses out, and churn analysis lives in that emotional layer.
Below the format toggles is the screener questions section. Add the highest-leverage churn screener: “When did you cancel your subscription?” filtered to the last 90 days for memory accuracy. For multi-tier studies, add a second screener that captures the plan tier so you can segment results inside the Intelligence Hub. Language and country targeting happens later in the Invite Participants step.
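The 90-day screener runs inside the product, but if you are pre-filtering a CRM export before upload, the same rule is a one-liner. Illustrative sketch only; `cancelled_on` stands in for whatever date field your export carries.

```python
# Illustrative only: the screener itself is configured in the Setup Interview
# UI. This applies the same 90-day recency rule to a CRM export before
# upload, so you only invite contacts whose memory is still fresh.
from datetime import date, timedelta

def within_recall_window(cancelled_on: date, today: date, days: int = 90) -> bool:
    """True if the cancellation is recent enough for accurate recall."""
    return timedelta(0) <= today - cancelled_on <= timedelta(days=days)
```

Run it over the export and drop anything older than the window (or dated in the future, which usually signals a data entry error).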
Churn-specific tip: moderator tone matters more in churn than in any other study type. An over-eager AI that pushes too hard on root cause loses candor. Pick Clara for audiences that skew towards customer-success relationship dynamics, and run a longer test conversation at Step 5 to validate the tone reads as listening, not interrogating. Read the Setup Interviewer docs for the full guide.
Step 5: Test Conversation
The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.
For churn specifically, run two test conversations before launch. Run the first as a customer who churned for a fixable reason like a missing integration, and listen for whether the AI probes into the workflow context behind the feature ask. Run the second as a customer who churned because the relationship felt cold and they switched to a competitor, and listen for whether the AI probes past the surface frustration into the specific moments where trust eroded.
Churn-specific tip: if the AI accepts a surface answer or pushes too hard and breaks rapport, go back to Step 3 and tighten the Interviewer Guidelines with instructions like “Probe with empathy, not interrogation. Acknowledge the customer’s frustration before asking the next ‘why’.” Then re-test. Tone validation is the highest-leverage 20 minutes of the entire workflow for churn. The Test Conversation docs cover what to look for.
Step 6: Launch
The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (the one with the rocket icon). Your study goes live immediately. You are redirected to the Study Dashboard.
The detail teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional, because most teams want to review the study config with CS or product before triggering recruitment.
Churn-specific tip: launch on a Tuesday-Thursday morning, then push recruitment the same day. Churned-customer response rates skew highest mid-week and decay sharply across weekends, partly because cancelled accounts check email less than active customers. A Tuesday or Wednesday morning launch gives you 3 strong response days. The Launch docs cover what can and cannot change post-launch.
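If you want to automate the mid-week habit, the date arithmetic is trivial. Hypothetical helper, nothing to do with the product itself:

```python
# Hypothetical scheduling helper: User Intuition launches immediately when
# you click Save and Launch; this just computes the next Tuesday-Thursday
# date if you are deciding when to click.
from datetime import date, timedelta

def next_midweek_launch(today: date) -> date:
    """Return today if it is Tue-Thu, otherwise the next Tuesday."""
    if 1 <= today.weekday() <= 3:  # Mon=0, so Tue=1 .. Thu=3
        return today
    return today + timedelta(days=(1 - today.weekday()) % 7)
```

Pair it with a calendar reminder so recruitment goes out the same morning the study goes live.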
Step 7: Invite Participants
This step opens after launch. From the Study Dashboard, the Invites tab plus the Actions dropdown gives you four invitation methods that map directly to churn recruitment patterns.
Invite Your Customers imports contacts manually, via CSV, or directly from a synced CRM (Salesforce, HubSpot) using the Contacts and Segments tabs. This is how most teams recruit cancelled customers: pull the past 90 days of cancellations as a segment, or sync the Stripe integration to auto-trigger interviews on cancellation, downgrade, or failed payment.

Research Panel opens recruitment from the 4M+ panel across 50+ languages, with country and language selection happening here for multilingual studies. Use this for category-level churn research among customers of competing products.

Share Link generates a unique URL for your cancellation-flow email, a Zapier workflow, or a personalized note from the CSM who owned the account.

Embed Widget drops always-on feedback into your customer portal or cancellation page for continuous churn intelligence, which is the highest-signal placement most SaaS teams overlook.
Churn-specific tip: combine methods. Stripe auto-trigger for involuntary churn, CRM segments for voluntary cancellations by tier, and the embed widget on the cancellation page for the always-on baseline. The Recruiting overview covers all four methods.
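The combination above can be written down as a routing table. Purely illustrative: the returned strings mirror the product's invitation method names, but the function itself is an assumption, not a User Intuition API.

```python
# Illustrative routing sketch for the combined recruitment strategy above.
# Method names mirror the product's invitation methods; the function is
# an assumption for illustration, not product functionality.

def recruitment_method(churn_type: str, in_cancellation_flow: bool) -> str:
    if in_cancellation_flow:
        return "Embed Widget"         # always-on baseline at the moment of cancel
    if churn_type == "involuntary":
        return "Stripe auto-trigger"  # payment failure, forced downgrade
    return "CRM segment (Invite Your Customers)"  # voluntary cancels, pulled by tier
```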
Step 8: Review Insights
Once interviews complete (first ones inside hours, full cohorts within 48-72 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.
The third surface is the Intelligence Hub, accessed from the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files, and ask natural-language questions across everything in the room. For a continuous churn program, this is the compounding layer, and the part most retention teams underuse.
Churn-specific tip: create one Intelligence Hub session called “Churn” at the start of the year, sync every monthly cohort into it, and tag each by lifecycle stage (month 1, month 3, month 6, month 12+). Then ask questions like “How are month-three drivers different from month-six drivers?” or “Which cancellation reasons declined after we rebuilt onboarding in Q2?” Cross-cohort pattern recognition is the part single-month reports cannot replicate. The Intelligence Hub docs and Reports docs cover the full output surface.
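The Hub answers these questions in natural language, but the underlying grouping is easy to picture. This sketch assumes you exported interview summaries tagged with hypothetical `lifecycle_stage` and `driver` fields; it is not how the Hub works internally.

```python
# Illustrative only: the Intelligence Hub answers cross-cohort questions in
# natural language. This shows the equivalent grouping over exported,
# tagged interview summaries (lifecycle_stage and driver are assumed tags).
from collections import defaultdict

def drivers_by_stage(interviews: list[dict]) -> dict[str, dict[str, int]]:
    """Count cancellation drivers per lifecycle stage across cohorts."""
    counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for row in interviews:
        counts[row["lifecycle_stage"]][row["driver"]] += 1
    return {stage: dict(d) for stage, d in counts.items()}
```

Comparing the month-three column against the month-six column is exactly the "how are the drivers different" question the tip above suggests asking the Hub.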
Mini case study
RudderStack ran the workflow above on a cohort of 28 customers who had cancelled in the previous 72 days. The CS team set up the study in a single afternoon: Churn Analysis template selected at Step 1, cancellation cohort and segment split briefed to Charles at Step 2, learning goals tightened at Step 3 to include explicit lifecycle-stage coverage, Audio mode and the 90-day timing screener configured at Step 4, two test conversations run at Step 5, study launched mid-week at Step 6.
Recruitment used the Stripe integration to auto-trigger interviews on cancellation events, plus a CRM segment pull for the cancellations that predated the Stripe integration. All 28 interviews completed within 72 hours.
The Reports tab and the Intelligence Hub surfaced a finding that contradicted two years of CS narrative. Price came up as a primary driver in fewer than 5 of 28 conversations. The dominant theme, present in 19 of 28 transcripts, was onboarding abandonment in week two: customers had felt unsupported during the implementation window and never recovered the confidence needed to renew. The team rebuilt onboarding around a structured week-two checkpoint with a named CSM. Within two quarters, churn dropped 22%, which translated to over $800K in retained ARR from a study that cost under $2,000.
“We’d run exit surveys for two years and blamed churn on pricing every quarter. User Intuition interviewed 28 churned customers in 72 hours. Price came up in fewer than five conversations. The real driver was onboarding abandonment, customers felt unsupported in week two and never recovered.”
Eric O., COO, RudderStack
What gets the best results from churn analysis on User Intuition?
Tips for getting the most out of the workflow, gathered from the patterns that consistently produce retention movement.
Keep the research plan tight. Aim for 4 to 6 main themes with 2 to 4 sub-questions each at Step 3. Plans with 8+ themes push interviews past 25 minutes and completion drops sharply. Twelve total questions or fewer keeps completion above 70% on cancellation cohorts.
Run two test conversations every time. One as the fixable-reason narrative (missing integration, workflow gap), one as the trust-erosion narrative (relationship cold, switched to competitor). The highest-leverage 20 minutes before launch is catching tone misalignment while it is still cheap to fix.
Pick voice over chat for churn. Voice transcripts surface the emotional emphasis, hesitation, and trust-erosion language that chat compresses out. Reserve chat for international cohorts where typing is a better experience.
Recruit on the trigger event, not on a calendar. Stripe auto-triggers land in the Dashboard while the decision is fresh. CRM batch pulls 60 days later land after the customer has rationalized the story into something tidier than what happened. Always-on beats batch.
Run continuous, not annual. 15 to 25 interviews per month at $20/interview on the Pro plan compounds into a sharper picture than one 50-interview annual post-mortem. Lifecycle-stage drivers (week two vs month three vs month six) only emerge with continuous coverage.
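The budget case is simple arithmetic using the $20/interview Pro figure quoted above; the numbers here are sketch math, not a quote from your account.

```python
# Budget arithmetic from the figures above ($20/interview on the Pro plan).
# Sketch math only, not a pricing quote.

def annual_program_cost(interviews_per_month: int, per_interview: float = 20.0) -> float:
    return interviews_per_month * 12 * per_interview

low = annual_program_cost(15)   # 15/month -> $3,600/year
high = annual_program_cost(25)  # 25/month -> $6,000/year
```

Even the high end sits an order of magnitude under the $45K+ a single traditional research study costs, which is the whole case for running monthly instead of annually.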
Sync the Intelligence Hub once per month. Every cohort should land in the same Churn session on a regular cadence. Cross-cohort pattern recognition is the highest-leverage output User Intuition produces for retention. See the cost breakdown and the comparison of dedicated churn analysis platforms for the budgeting case.
Route insights inside 7 days. Retention findings degrade fast because the next renewal cohort is already in flight. Route findings to owners (CS, product, RevOps, marketing) with a defined SLA. The same compounding logic underpins the SaaS churn workflow for product-led teams. The studies that move retention reach a decision-maker the same week.