SaaS user research templates eliminate the blank-page problem that delays studies by days or weeks. Every hour spent designing a study from scratch is an hour the research is not in the field.
These templates cover the 6 research types that SaaS product teams run most frequently. Each includes pre-built objectives, participant criteria, discussion guides, and analysis frameworks. Copy the template, customize for your product context, and launch. With AI-moderated interviews, you go from template to live study in under 10 minutes.
Template 1: Churn Diagnosis
Research objective: Understand why customers cancel and identify the retention interventions that would have changed their decision.
When to use: Monthly (rolling program) or after a churn spike.
Target participants:
- Customers who canceled in the last 30 days
- At-risk customers showing usage decline (if identifiable)
- Customers who downgraded plans
Sample size: 30-50 per segment (SMB, mid-market, enterprise)
Screening criteria:
- Was a paying customer for 3+ months (avoids trial-only churn)
- Canceled within the last 30 days (fresh memory)
- Used the product at least weekly during active subscription
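As a sketch of how these screening criteria could translate into an automated participant pull, assuming a hypothetical customer-record shape (field names here are illustrative; adapt them to your CRM export):

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # Hypothetical record shape for illustration only.
    tenure_months: int      # months as a paying customer
    days_since_cancel: int  # days since cancellation
    weekly_active: bool     # used the product at least weekly while subscribed

def qualifies_for_churn_study(c: Customer) -> bool:
    """Apply the three screening criteria above."""
    return (
        c.tenure_months >= 3           # avoids trial-only churn
        and c.days_since_cancel <= 30  # fresh memory
        and c.weekly_active            # real usage during the subscription
    )

candidates = [
    Customer(tenure_months=8, days_since_cancel=12, weekly_active=True),
    Customer(tenure_months=1, days_since_cancel=5, weekly_active=True),   # trial-only
    Customer(tenure_months=14, days_since_cancel=90, weekly_active=True), # stale memory
]
screened = [c for c in candidates if qualifies_for_churn_study(c)]
```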
Discussion Guide
- “Walk me through when you first started thinking about canceling. What was happening at that point?”
- “What changed between when things were working and when they weren’t?”
- “Did you raise any of these concerns with our support team or your account manager? What happened?”
- “What would have needed to change for you to stay?”
- “What are you using instead now? How is it going?”
- “How did pricing factor into your decision? Was there a point where the value felt different?”
- “If you could change one thing about the product that would have prevented this, what would it be?”
- “What would you tell a colleague who asked whether they should use our product?”
Analysis framework:
- Categorize churn drivers: product gaps, onboarding failure, competitive displacement, pricing, champion loss, organizational change
- Map each driver to the retention intervention window (when could CS have intervened?)
- Quantify driver distribution: what percentage of churn traces to each category?
- Identify recoverable vs. unrecoverable churn segments
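The quantification step can be sketched as a simple tally once each interview has been coded to one primary driver (the coded labels below are hypothetical sample data):

```python
from collections import Counter

# One coded primary churn driver per interview; labels follow the
# categories above. Sample data for illustration only.
coded_interviews = [
    "product gaps", "pricing", "product gaps", "onboarding failure",
    "competitive displacement", "product gaps", "champion loss", "pricing",
]

counts = Counter(coded_interviews)
total = len(coded_interviews)
# Percentage of churn traced to each driver category.
distribution = {driver: round(100 * n / total, 1) for driver, n in counts.items()}
```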
Decision this informs: Product backlog prioritization, CS intervention playbooks, pricing strategy, competitive response
Cost estimate: 50 interviews at $20 each = $1,000, plus $1,250-$2,500 in incentives = $2,250-$3,500 total
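The arithmetic behind this estimate (and the later ones) can be expressed as one small helper; the $20 platform rate and the $25-$50 per-participant incentive range are taken from the figures above, and other templates swap in higher incentive ranges:

```python
def study_cost(n_interviews: int, rate: float = 20.0,
               incentive_low: float = 25.0, incentive_high: float = 50.0):
    """Return (low, high) total cost: platform fees plus the incentive range."""
    fees = n_interviews * rate
    return (fees + n_interviews * incentive_low,
            fees + n_interviews * incentive_high)

low, high = study_cost(50)  # the churn study above
```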
For deeper methodology on churn research, see the complete churn analysis playbook.
Template 2: Win-Loss Analysis
Research objective: Understand why prospects chose you (or chose a competitor) and what drove the purchase decision.
When to use: Monthly (rolling program). Continuous programs produce the most value.
Target participants:
- Recently won customers (closed in last 30-60 days)
- Recently lost prospects (chose competitor or no decision in last 30-60 days)
- Trial users who converted or abandoned
Sample size: 20-30 per outcome (won vs. lost)
Screening criteria:
- Evaluated your product within the last 60 days
- Was involved in the purchase decision (not just an end-user)
- Completed at least a product demo or free trial
Discussion Guide
- “Walk me through how you first identified the need for a tool in this space.”
- “What solutions did you evaluate? How did you build that shortlist?”
- “For each option you considered, what stood out — positively and negatively?”
- “What was the most important factor in your final decision?”
- “Was there a specific moment during the evaluation where your preference shifted?”
- “How did internal stakeholders at your company influence the decision?”
- “How did pricing factor in? Was there a pricing structure you preferred?”
- “What almost changed your mind?”
- “If you could improve one thing about your evaluation experience with us, what would it be?”
- “What would you tell a peer evaluating tools in this space?”
Analysis framework:
- Map the evaluation process: awareness trigger, shortlist formation, evaluation criteria, decision moment
- Identify the top 3 win drivers and top 3 loss drivers
- Segment by competitor: which specific competitors win deals from you, and why?
- Track trends: are loss patterns shifting quarter over quarter?
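Tracking the quarter-over-quarter trend can be as simple as grouping coded loss drivers by quarter (the deal records below are hypothetical sample data):

```python
from collections import defaultdict, Counter

# (quarter, primary loss driver) pairs coded from lost-deal interviews.
lost_deals = [
    ("2024-Q1", "pricing"), ("2024-Q1", "missing feature"),
    ("2024-Q2", "pricing"), ("2024-Q2", "pricing"),
    ("2024-Q2", "competitor integration"),
]

by_quarter = defaultdict(Counter)
for quarter, driver in lost_deals:
    by_quarter[quarter][driver] += 1

# The top loss driver per quarter shows whether patterns are shifting.
top_driver = {q: c.most_common(1)[0][0] for q, c in by_quarter.items()}
```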
Decision this informs: Competitive positioning, sales enablement, product roadmap, pricing strategy
Cost estimate: 50 interviews = $1,000 + $2,500-$5,000 incentives (lost prospects require higher incentives) = $3,500-$6,000 total
For the complete methodology, see the SaaS win-loss analysis playbook.
Template 3: Feature Validation
Research objective: Determine whether a proposed feature solves a real problem and would drive adoption or willingness to pay.
When to use: Before committing engineering resources to a feature build; run per sprint as needed.
Target participants:
- Active users in the target persona
- Users who have requested this feature (if trackable)
- Users of competitive products that have this feature
Sample size: 20-30 per persona
Screening criteria:
- Active user of the relevant product area
- Has experienced the problem the feature addresses (not hypothetical)
- Represents the target persona (role, company size, use case)
Discussion Guide
- “Walk me through how you currently handle [the problem this feature solves].”
- “What is the most frustrating part of your current approach?”
- “How much time do you spend on this each week?”
- “Have you tried other tools or workarounds for this? What happened?”
- “If [Feature X] existed in the product, when this week would you use it?”
- “What would [Feature X] need to do for you to stop using your current workaround?”
- “Who else on your team would use this? How would they use it differently?”
- “How would you explain what this feature does to a colleague?”
Analysis framework:
- Problem severity: how many users experience this problem, and how painful is it?
- Current workarounds: what solutions exist, and what is the switching threshold?
- Adoption signal: can users describe specific, concrete use occasions?
- Team dimension: does the feature drive individual utility or team adoption?
Decision this informs: Build / defer / kill decision for the proposed feature
Cost estimate: 25 interviews = $500 + $625-$1,250 incentives = $1,125-$1,750 total
Template 4: Onboarding Research
Research objective: Identify why users who sign up fail to activate, and what changes to the onboarding flow would improve activation rates.
When to use: Quarterly or after major onboarding flow changes. Critical whenever activation rates shift significantly.
Target participants:
- Users who signed up in the last 30 days and activated (comparison group)
- Users who signed up in the last 30 days and did not activate (primary group)
- Users who signed up and churned within the first 30 days
Sample size: 20-30 per cohort (activated vs. not activated)
Screening criteria:
- Signed up within the last 30 days (recent experience)
- Represents target customer profile (not test accounts or competitors)
- Completed at least one login after signup
Discussion Guide
- “What were you trying to accomplish when you signed up?”
- “Walk me through your first session with the product. What did you do first?”
- “At what point did you feel confident this would work for your use case?” (Or: “What prevented you from reaching that point?”)
- “What almost made you give up during setup?”
- “What did you expect to see on your first login that you didn’t find?”
- “Did you use any help resources — docs, tutorials, chat? What prompted that?”
- “How long before the product felt natural to use?”
- “What one thing would you change about the getting-started experience?”
Analysis framework:
- Map the intended onboarding flow against the actual first-session flow users report
- Identify the activation gap: where does the user’s journey diverge from the intended path?
- Categorize friction types: confusion, missing features, incorrect expectations, integration blockers
- Compare activated vs. non-activated: what differs in their first-session experience?
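Finding the divergence point can be sketched by comparing the intended path against the steps a user actually reported; both sequences below are hypothetical, coded from interview transcripts:

```python
# Intended onboarding path vs. the first-session steps one
# non-activated user reported. Step names are illustrative.
intended_flow = ["create workspace", "invite teammate",
                 "connect data source", "view first report"]
reported_flow = ["create workspace", "view first report"]

# First index where the reported journey leaves the intended path.
divergence = next(
    (i for i, step in enumerate(intended_flow)
     if i >= len(reported_flow) or reported_flow[i] != step),
    None,
)
# Intended steps this user skipped entirely.
skipped = [s for s in intended_flow if s not in reported_flow]
```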
Decision this informs: Onboarding flow redesign, first-run experience, welcome email content, documentation priorities
Cost estimate: 40 interviews = $800 + $1,000-$2,000 incentives = $1,800-$2,800 total
For relevant interview questions, see the SaaS interview question guide.
Template 5: Competitive Intelligence
Research objective: Understand how your product is perceived relative to alternatives and what drives competitive wins and losses.
When to use: Quarterly. Also after a new competitor launches or an existing competitor ships a major update.
Target participants:
- Current customers who previously used a competitor
- Prospects who evaluated you but chose a competitor
- Users who currently use both your product and a competitor
Sample size: 30-50 across segments
Screening criteria:
- Has used or evaluated at least one competitive product in the last 12 months
- Was involved in the tool evaluation or selection decision
- Represents your target market (not a misfit segment)
Discussion Guide
- “What tools have you used or evaluated for [this use case] in the past year?”
- “For each tool, what stood out as a strength? What was the biggest limitation?”
- “How would you describe what makes [our product] different from alternatives?”
- “What does [our product] do that nothing else does as well?”
- “If [our product] disappeared tomorrow, what would you use instead? Why?”
- “What is one thing a competitor does better than us?”
- “What do you hear from colleagues at other companies about how they solve this problem?”
- “When you last saw a demo or ad for an alternative, what caught your attention?”
- “What would a competitor need to offer to make you switch?”
Analysis framework:
- Map the competitive landscape from the user’s perspective (may differ from your internal view)
- Identify differentiation: what do users consistently cite as your unique value?
- Identify vulnerabilities: where do competitors consistently outperform you?
- Track competitive positioning trends: how is perception shifting over time?
Decision this informs: Competitive positioning, marketing messaging, product roadmap defensive priorities, sales battlecards
Cost estimate: 40 interviews = $800 + $2,000-$4,000 incentives = $2,800-$4,800 total
Template 6: Pricing and Packaging Research
Research objective: Understand how customers perceive pricing, what drives plan selection, and where pricing creates friction or upgrade barriers.
When to use: Before pricing changes. Semi-annually for ongoing monitoring.
Target participants:
- Customers across different plan tiers
- Customers who recently upgraded or downgraded
- Trial users who did or did not convert (for trial-to-paid research)
Sample size: 30-50 across plan tiers
Screening criteria:
- Has been a customer for 3+ months (understands the value)
- Made the plan selection or upgrade decision
- Represents your target segments
Discussion Guide
- “Walk me through how you chose your current plan.”
- “What do you feel like you’re paying for? What are you paying for but not using?”
- “If the price doubled tomorrow, what would you do?”
- “How does the cost of [Product] compare to the cost of the problem it solves?”
- “Would you rather pay per seat, per usage, or a flat monthly fee? Why?”
- “What feature or change would make you upgrade to the next tier?”
- “How do you explain the cost to your finance team or manager?”
- “If you were designing the pricing, what would you change?”
Analysis framework:
- Value perception: what do users believe they are paying for?
- Price sensitivity: how elastic is demand at current pricing?
- Packaging alignment: do plan tiers match how customers use the product?
- Upgrade barriers: what prevents customers from moving to higher tiers?
- Competitive pricing context: how do users compare your pricing to alternatives?
Decision this informs: Pricing strategy, plan structure, feature packaging, upgrade flow design
Cost estimate: 40 interviews = $800 + $1,000-$2,000 incentives = $1,800-$2,800 total
How Do You Customize These Templates?
Each template is a starting point. Customize based on your context:
- Replace generic language with your product name, feature names, and competitor names
- Adjust screening criteria to match your specific customer segments
- Add 1-2 product-specific questions relevant to your current sprint priorities
- Remove questions that are not relevant to your immediate decision (keep studies to a focused 8-12 questions)
- Set participant incentives based on your audience: $15-$25 for consumer SaaS, $25-$75 for B2B professionals, $100-$200 for executives
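The incentive guidance above can be captured as a small lookup table (the audience labels are illustrative):

```python
# Per-participant incentive ranges in USD, from the guidance above.
INCENTIVE_RANGES = {
    "consumer_saas": (15, 25),
    "b2b_professional": (25, 75),
    "executive": (100, 200),
}

def incentive_budget(audience: str, n_participants: int):
    """Return the (low, high) total incentive budget for a study."""
    low, high = INCENTIVE_RANGES[audience]
    return n_participants * low, n_participants * high
```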
The Template Cadence: Putting It All Together
For continuous SaaS discovery, run templates on a rotating schedule:
| Frequency | Template | Sample | Budget/Cycle |
|---|---|---|---|
| Monthly (rolling) | Churn diagnosis | 30 | ~$1,500 |
| Monthly (rolling) | Win-loss analysis | 30 | ~$2,500 |
| Per sprint (as needed) | Feature validation | 20 | ~$1,000 |
| Quarterly | Onboarding research | 40 | ~$2,000 |
| Quarterly | Competitive intelligence | 40 | ~$3,000 |
| Semi-annually | Pricing research | 40 | ~$2,000 |
Annual budget: roughly $72,000 for 1,100+ interviews across the fixed-cadence programs, plus about $1,000 per feature-validation study. That is a fraction of what commissioning each study separately from a traditional agency would cost, for a research infrastructure that compounds every sprint.
Each study feeds the Intelligence Hub, where findings from churn analysis connect to win-loss patterns connect to feature validation results. The templates do not just produce individual reports. They build a cumulative knowledge base that makes every subsequent study more valuable.