
SaaS User Research Template: Study Design Guide

By Kevin, Founder & CEO

SaaS user research templates eliminate the blank-page problem that delays studies by days or weeks. Every hour spent designing a study from scratch is an hour the research is not in the field.

These templates cover the 6 research types that SaaS product teams run most frequently. Each includes pre-built objectives, participant criteria, discussion guides, and analysis frameworks. Copy the template, customize for your product context, and launch. With AI-moderated interviews, you go from template to live study in under 10 minutes.

Template 1: Churn Diagnosis


Research objective: Understand why customers cancel and identify the retention interventions that would have changed their decision.

When to use: Monthly (rolling program) or after a churn spike.

Target participants:

  • Customers who canceled in the last 30 days
  • At-risk customers showing usage decline (if identifiable)
  • Customers who downgraded plans

Sample size: 30-50 per segment (SMB, mid-market, enterprise)

Screening criteria:

  • Was a paying customer for 3+ months (avoids trial-only churn)
  • Canceled within the last 30 days (fresh memory)
  • Used the product at least weekly during active subscription

Discussion Guide

  1. “Walk me through when you first started thinking about canceling. What was happening at that point?”
  2. “What changed between when things were working and when they weren’t?”
  3. “Did you raise any of these concerns with our support team or your account manager? What happened?”
  4. “What would have needed to change for you to stay?”
  5. “What are you using instead now? How is it going?”
  6. “How did pricing factor into your decision? Was there a point where the value felt different?”
  7. “If you could change one thing about the product that would have prevented this, what would it be?”
  8. “What would you tell a colleague who asked whether they should use our product?”

Analysis framework:

  • Categorize churn drivers: product gaps, onboarding failure, competitive displacement, pricing, champion loss, organizational change
  • Map each driver to the retention intervention window (when could CS have intervened?)
  • Quantify driver distribution: what percentage of churn traces to each category?
  • Identify recoverable vs. unrecoverable churn segments
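
The driver distribution in the framework above is a straightforward tally once each interview has been coded. Here is a minimal sketch in Python, assuming every interview is tagged with one primary driver from the categories listed above (the coded data below is hypothetical):

```python
# Minimal sketch: tally coded churn interviews into driver categories.
# Assumes each interview was already coded with one primary driver;
# the sample data below is hypothetical.
from collections import Counter

CATEGORIES = [
    "product gap", "onboarding failure", "competitive displacement",
    "pricing", "champion loss", "organizational change",
]

# One primary driver per interview (hypothetical coding output).
coded_interviews = [
    "product gap", "pricing", "product gap", "champion loss",
    "onboarding failure", "product gap", "competitive displacement",
]

counts = Counter(coded_interviews)
total = len(coded_interviews)

for category in CATEGORIES:
    share = counts[category] / total * 100
    print(f"{category}: {counts[category]} interviews ({share:.0f}%)")
```

Running the same tally per segment (SMB, mid-market, enterprise) shows whether drivers shift with customer size.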

Decision this informs: Product backlog prioritization, CS intervention playbooks, pricing strategy, competitive response

Cost estimate: 50 interviews at $20 each = $1,000 + $1,250-$2,500 incentives = $2,250-$3,500 total

For deeper methodology on churn research, see the complete churn analysis playbook.

Template 2: Win-Loss Analysis


Research objective: Understand why prospects chose you (or chose a competitor) and what drove the purchase decision.

When to use: Monthly (rolling program). Continuous programs produce the most value.

Target participants:

  • Recently won customers (closed in last 30-60 days)
  • Recently lost prospects (chose competitor or no decision in last 30-60 days)
  • Trial users who converted or abandoned

Sample size: 20-30 per outcome (won vs. lost)

Screening criteria:

  • Evaluated your product within the last 60 days
  • Was involved in the purchase decision (not just an end-user)
  • Completed at least a product demo or free trial

Discussion Guide

  1. “Walk me through how you first identified the need for a tool in this space.”
  2. “What solutions did you evaluate? How did you build that shortlist?”
  3. “For each option you considered, what stood out — positively and negatively?”
  4. “What was the most important factor in your final decision?”
  5. “Was there a specific moment during the evaluation where your preference shifted?”
  6. “How did internal stakeholders at your company influence the decision?”
  7. “How did pricing factor in? Was there a pricing structure you preferred?”
  8. “What almost changed your mind?”
  9. “If you could improve one thing about your evaluation experience with us, what would it be?”
  10. “What would you tell a peer evaluating tools in this space?”

Analysis framework:

  • Map the evaluation process: awareness trigger, shortlist formation, evaluation criteria, decision moment
  • Identify the top 3 win drivers and top 3 loss drivers
  • Segment by competitor: which specific competitors win deals from you, and why?
  • Track trends: are loss patterns shifting quarter over quarter?

Decision this informs: Competitive positioning, sales enablement, product roadmap, pricing strategy

Cost estimate: 50 interviews = $1,000 + $2,500-$5,000 incentives (lost prospects require higher incentives) = $3,500-$6,000 total

For the complete methodology, see the SaaS win-loss analysis playbook.

Template 3: Feature Validation


Research objective: Determine whether a proposed feature solves a real problem and would drive adoption or willingness to pay.

When to use: Before committing engineering resources to a feature build. Every sprint, as needed.

Target participants:

  • Active users in the target persona
  • Users who have requested this feature (if trackable)
  • Users of competitive products that have this feature

Sample size: 20-30 per persona

Screening criteria:

  • Active user of the relevant product area
  • Has experienced the problem the feature addresses (not hypothetical)
  • Represents the target persona (role, company size, use case)

Discussion Guide

  1. “Walk me through how you currently handle [the problem this feature solves].”
  2. “What is the most frustrating part of your current approach?”
  3. “How much time do you spend on this each week?”
  4. “Have you tried other tools or workarounds for this? What happened?”
  5. “If [Feature X] existed in the product, when this week would you use it?”
  6. “What would [Feature X] need to do for you to stop using your current workaround?”
  7. “Who else on your team would use this? How would they use it differently?”
  8. “How would you explain what this feature does to a colleague?”

Analysis framework:

  • Problem severity: how many users experience this problem, and how painful is it?
  • Current workarounds: what solutions exist, and what is the switching threshold?
  • Adoption signal: can users describe specific, concrete use occasions?
  • Team dimension: does the feature drive individual utility or team adoption?

Decision this informs: Build / defer / kill decision for the proposed feature

Cost estimate: 25 interviews = $500 + $625-$1,250 incentives = $1,125-$1,750 total

Template 4: Onboarding Research


Research objective: Identify why users who sign up fail to activate, and what changes to the onboarding flow would improve activation rates.

When to use: Quarterly or after major onboarding flow changes. Critical after every significant activation rate change.

Target participants:

  • Users who signed up in the last 30 days and activated (controls)
  • Users who signed up in the last 30 days and did not activate (test group)
  • Users who signed up and churned within the first 30 days

Sample size: 20-30 per cohort (activated vs. not activated)

Screening criteria:

  • Signed up within the last 30 days (recent experience)
  • Represents target customer profile (not test accounts or competitors)
  • Completed at least one login after signup

Discussion Guide

  1. “What were you trying to accomplish when you signed up?”
  2. “Walk me through your first session with the product. What did you do first?”
  3. “At what point did you feel confident this would work for your use case?” (Or: “What prevented you from reaching that point?”)
  4. “What almost made you give up during setup?”
  5. “What did you expect to see on your first login that you didn’t find?”
  6. “Did you use any help resources — docs, tutorials, chat? What prompted that?”
  7. “How long before the product felt natural to use?”
  8. “What one thing would you change about the getting-started experience?”

Analysis framework:

  • Map the intended onboarding flow against the actual first-session flow users report
  • Identify the activation gap: where does the user’s journey diverge from the intended path?
  • Categorize friction types: confusion, missing features, incorrect expectations, integration blockers
  • Compare activated vs. non-activated: what differs in their first-session experience?

Decision this informs: Onboarding flow redesign, first-run experience, welcome email content, documentation priorities

Cost estimate: 40 interviews = $800 + $1,000-$2,000 incentives = $1,800-$2,800 total

For relevant interview questions, see the SaaS interview question guide.

Template 5: Competitive Intelligence


Research objective: Understand how your product is perceived relative to alternatives and what drives competitive wins and losses.

When to use: Quarterly. Also after a new competitor launches or an existing competitor ships a major update.

Target participants:

  • Current customers who previously used a competitor
  • Prospects who evaluated you but chose a competitor
  • Users who currently use both your product and a competitor

Sample size: 30-50 across segments

Screening criteria:

  • Has used or evaluated at least one competitive product in the last 12 months
  • Was involved in the tool evaluation or selection decision
  • Represents your target market (not a misfit segment)

Discussion Guide

  1. “What tools have you used or evaluated for [this use case] in the past year?”
  2. “For each tool, what stood out as a strength? What was the biggest limitation?”
  3. “How would you describe what makes [our product] different from alternatives?”
  4. “What does [our product] do that nothing else does as well?”
  5. “If [our product] disappeared tomorrow, what would you use instead? Why?”
  6. “What is one thing a competitor does better than us?”
  7. “What do you hear from colleagues at other companies about how they solve this problem?”
  8. “When you last saw a demo or ad for an alternative, what caught your attention?”
  9. “What would a competitor need to offer to make you switch?”

Analysis framework:

  • Map the competitive landscape from the user’s perspective (may differ from your internal view)
  • Identify differentiation: what do users consistently cite as your unique value?
  • Identify vulnerabilities: where do competitors consistently outperform you?
  • Track competitive positioning trends: how is perception shifting over time?

Decision this informs: Competitive positioning, marketing messaging, product roadmap defensive priorities, sales battlecards

Cost estimate: 40 interviews = $800 + $2,000-$4,000 incentives = $2,800-$4,800 total

Template 6: Pricing and Packaging Research


Research objective: Understand how customers perceive pricing, what drives plan selection, and where pricing creates friction or upgrade barriers.

When to use: Before pricing changes. Semi-annually for ongoing monitoring.

Target participants:

  • Customers across different plan tiers
  • Customers who recently upgraded or downgraded
  • Trial users who did or did not convert (for trial-to-paid research)

Sample size: 30-50 across plan tiers

Screening criteria:

  • Has been a customer for 3+ months (understands the value)
  • Made the plan selection or upgrade decision
  • Represents your target segments

Discussion Guide

  1. “Walk me through how you chose your current plan.”
  2. “What do you feel like you’re paying for? What are you paying for but not using?”
  3. “If the price doubled tomorrow, what would you do?”
  4. “How does the cost of [Product] compare to the cost of the problem it solves?”
  5. “Would you rather pay per seat, per usage, or a flat monthly fee? Why?”
  6. “What feature or change would make you upgrade to the next tier?”
  7. “How do you explain the cost to your finance team or manager?”
  8. “If you were designing the pricing, what would you change?”

Analysis framework:

  • Value perception: what do users believe they are paying for?
  • Price sensitivity: how elastic is demand at current pricing?
  • Packaging alignment: do plan tiers match how customers use the product?
  • Upgrade barriers: what prevents customers from moving to higher tiers?
  • Competitive pricing context: how do users compare your pricing to alternatives?

Decision this informs: Pricing strategy, plan structure, feature packaging, upgrade flow design

Cost estimate: 40 interviews = $800 + $1,000-$2,000 incentives = $1,800-$2,800 total

How Do You Customize These Templates?


Each template is a starting point. Customize based on your context:

  1. Replace generic language with your product name, feature names, and competitor names
  2. Adjust screening criteria to match your specific customer segments
  3. Add 1-2 product-specific questions relevant to your current sprint priorities
  4. Remove questions that are not relevant to your immediate decision (keep studies focused at 8-12 questions)
  5. Set participant incentives based on your audience: $15-$25 for consumer SaaS, $25-$75 for B2B professionals, $100-$200 for executives
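
Every cost estimate in this guide follows the same arithmetic: interview count times the per-interview platform cost, plus interview count times the incentive. Here is a minimal sketch, assuming the ~$20-per-interview figure used above and the incentive bands from step 5 (the function and tier names are illustrative):

```python
# Minimal sketch: estimate a study budget from interview count and incentive band.
# The $20 platform cost and the incentive bands mirror the figures in this guide;
# the tier names are illustrative.
PLATFORM_COST_PER_INTERVIEW = 20

INCENTIVE_BANDS = {
    "consumer": (15, 25),
    "b2b_professional": (25, 75),
    "executive": (100, 200),
}

def estimate_study_cost(interviews: int, tier: str) -> tuple[int, int]:
    """Return the (low, high) budget for a study, incentives included."""
    low_incentive, high_incentive = INCENTIVE_BANDS[tier]
    platform = interviews * PLATFORM_COST_PER_INTERVIEW
    return (platform + interviews * low_incentive,
            platform + interviews * high_incentive)

# Example: a 50-interview churn study with B2B professional incentives.
low, high = estimate_study_cost(50, "b2b_professional")
print(f"Estimated budget: ${low:,}-${high:,}")
```

The template cost estimates above use a point within these bands for each audience, so your totals will vary with where your participants actually fall.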

The Template Cadence: Putting It All Together


For continuous SaaS discovery, run templates on a rotating schedule:

Frequency | Template | Sample | Budget/Cycle
Monthly (rolling) | Churn diagnosis | 30 | ~$1,500
Monthly (rolling) | Win-loss analysis | 30 | ~$2,500
Per sprint (as needed) | Feature validation | 20 | ~$1,000
Quarterly | Onboarding research | 40 | ~$2,000
Quarterly | Competitive intelligence | 40 | ~$3,000
Semi-annually | Pricing research | 40 | ~$2,000

Annual budget: ~$18,000-$24,000 for 1,000+ interviews across every research type. That is less than the cost of a single traditional agency engagement, and it buys a research infrastructure that compounds every sprint.

Each study feeds the Intelligence Hub, where findings from churn analysis connect to win-loss patterns, which in turn connect to feature validation results. The templates do not just produce individual reports. They build a cumulative knowledge base that makes every subsequent study more valuable.

Frequently Asked Questions

What does each template include, and how do you use it?
Each template includes the research objective, participant criteria, recommended sample size, 8-12 discussion guide questions, and an analysis framework. Select the template that matches your research question, customize the questions for your product context, define your participant criteria, and launch. With AI-moderated platforms, you can go from template to live study in under 10 minutes.

How many participants does each template need?
Churn analysis: 30-50 per segment. Win-loss: 20-30 per outcome (won vs. lost). Feature validation: 20-30 per persona. Onboarding research: 20-30 per cohort. Competitive intelligence: 30-50 across evaluators and switchers. Pricing research: 30-50 across plan tiers. These sizes reach thematic saturation while remaining cost-effective at $20/interview.

Can you combine multiple templates into a single study?
No. Each template targets one research objective with 8-12 focused questions. Combining churn and feature validation questions in one study produces shallow answers across both topics. Run separate studies for separate questions — at $200-$1,000 per study, the cost of focus is negligible.

How often should you run each template?
Churn analysis: monthly (rolling). Win-loss: monthly (rolling). Feature validation: every sprint (as needed). Onboarding research: quarterly or after major flow changes. Competitive intelligence: quarterly. Pricing research: semi-annually or before pricing changes. The Intelligence Hub tracks changes over time when you rerun templates consistently.

How much should you customize a template?
Keep the template's research objective, sample size, and analysis framework unchanged. Customize only the discussion guide questions by replacing generic product references with your specific feature names, user types, or competitive context. Add 1-2 questions specific to your product's unique dynamics but stay within the template's total question count. Changing structure reduces comparability if you rerun the study later.

How do you prioritize churn findings?
For churn analysis, categorize findings into three buckets: product factors (features missing, performance issues, usability friction), relationship factors (support quality, onboarding experience, account management), and competitive factors (a specific competitor's offer or price). Rank by frequency across interviews. The bucket with the highest frequency and the specific finding within it that appears most often is your highest-priority intervention target.
Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
