
SaaS User Research Interview Questions: 60 for PMs

By Kevin, Founder & CEO

SaaS user research interview questions are the difference between research that changes product decisions and research that confirms what the team already believed. Most question lists recycle generic prompts — “What do you like about the product?” “What would you improve?” — that produce polite, surface-level answers no PM can act on.

This guide provides 60 questions organized by the research decision they inform. Each question includes the probing logic — why this question works, what it surfaces, and what follow-up direction the AI moderator (or human interviewer) should pursue. These are not abstract research questions. They are designed for SaaS product teams running AI-moderated interviews at sprint speed, where every question needs to earn its place in a 30-minute conversation.

Onboarding and Activation Research (Questions 1-10)


Onboarding research answers one question: why do users who signed up fail to become active? Product analytics shows the drop-off. These questions reveal the reasoning behind it.

1. “Walk me through your first day with [Product]. What did you do first, and why?”

This reconstructs the activation sequence from the user’s perspective. Most teams design onboarding based on the intended flow. Users rarely follow it. The gap between intended and actual first-day behavior reveals where onboarding fails before the product even gets a fair trial.

2. “What were you trying to accomplish in that first session?”

Surfaces the job-to-be-done that drove signup. If the user’s goal does not match your activation milestone, onboarding is solving the wrong problem. A project management tool might define activation as “created a project” while the user’s actual first goal was “imported data from their old tool.”

3. “At what point did you feel confident this would work for your use case?”

Identifies the “aha moment” — the specific interaction where value became real. If users cannot answer this question, they may still be evaluating. If the aha moment comes late (day 5+ for a daily-use tool), there is a time-to-value problem.

4. “What almost made you give up during setup?”

Directly surfaces friction points. “Almost” is key — these users persisted, but others did not. The friction they describe is likely filtering out less patient prospects. Common reveals: confusing permissions, unclear data import, missing integrations, overwhelming feature sets.

5. “Who else at your company was involved in getting started?”

B2B SaaS onboarding often fails at the organizational layer, not the product layer. If the buyer cannot onboard their team without scheduling a training session, adoption bottlenecks compound. This question surfaces whether onboarding is designed for individuals or for the teams that actually need to adopt it.

6. “What did you expect to see on your first login that you didn’t find?”

Expectation gaps kill activation. If marketing promises real-time analytics and the first screen is an empty dashboard with a “set up your data connections” prompt, the gap between expectation and reality creates immediate friction. These reveals drive landing page, email, and onboarding flow redesigns.

7. “How did you decide which features to try first?”

Reveals the user’s mental model of the product. If they tried the features in an order your team did not anticipate, the product’s information architecture may not match how users think about the problem space.

8. “Did you watch any tutorials, read any docs, or ask anyone for help? What prompted that?”

Identifies where self-serve onboarding breaks. If users are reaching for documentation on day one, the product is not self-explanatory at the points where they get stuck. The specific triggers reveal which product surfaces need improvement.

9. “What is one thing you wish someone had told you before you started?”

Captures the “known unknowns” — context that would have accelerated activation but was not surfaced. These responses often directly translate into onboarding tooltips, welcome email content, and first-run experience improvements.

10. “How long before you felt like you could use [Product] without thinking about how it works?”

Measures time-to-fluency. For a SaaS tool used daily, fluency should arrive within the first week. If users report weeks or months to reach fluency, the learning curve is a churn risk — users will find simpler alternatives before they master yours.

Feature Adoption and Validation (Questions 11-20)


Feature research answers: should we build this, and will people use it? These questions go beyond “would you use Feature X?” (a question that produces unreliable yes answers) to understand the actual behavior and context that determines adoption.

11. “Walk me through how you currently handle [the problem this feature solves].”

The single most important feature validation question. Current behavior reveals real need. If the user has a working solution (even a clunky one), your feature competes against the status quo — not against nothing. If they have no solution, the problem may not be important enough to justify a feature.

12. “What is the most annoying part of your current workaround?”

Identifies the specific friction that creates switching motivation. The answer to this question is your feature’s value proposition. If users cannot articulate what annoys them about their current approach, the pain is not acute enough to drive adoption.

13. “If this feature existed tomorrow, when would you use it this week?”

Forces concrete usage scenarios instead of abstract interest. “I’d use it every Monday during planning” is a strong adoption signal. “I’d probably check it out sometime” is not. The specificity of the answer predicts actual adoption.

14. “Who on your team would also use this? How would they use it differently?”

Surfaces the team adoption dimension. Features that serve one role create utility. Features that serve the team create retention. Understanding multi-user dynamics reveals whether the feature strengthens organizational lock-in.

15. “What would this feature need to do for you to stop using [current workaround]?”

Defines the minimum viable feature. Users will articulate the bar your feature must clear to earn a behavior change. This prevents both under-building (missing critical capabilities) and over-building (adding complexity beyond the switching threshold).

16. “How would you explain what this feature does to a colleague in one sentence?”

Tests whether the feature concept is clear enough to spread through word of mouth. If users struggle to articulate it, the feature either does too many things, has unclear positioning, or solves a problem users do not have language for yet.

17. “What existing feature in [Product] do you wish worked differently? Tell me about a specific time it frustrated you.”

Captures specific friction incidents rather than general satisfaction ratings. “The reporting module is fine” tells you nothing. “Last Tuesday I spent 40 minutes trying to filter by date range and eventually exported to Excel” tells you exactly what to fix.

18. “If you could change one thing about how [Product] works, what would it be and why?”

Open-ended feature prioritization from the user’s perspective. The “why” is as important as the “what” — it reveals the underlying need, which may have multiple solutions beyond the user’s suggestion.

19. “What feature did you expect [Product] to have that it doesn’t?”

Surfaces feature expectations set by marketing, competitors, or the user’s mental model of the product category. These gaps are churn risks — users who expect a capability and do not find it start evaluating alternatives.

20. “How much time per week do you spend on tasks that [Product] should handle but doesn’t?”

Quantifies the pain in time — the most persuasive metric for feature prioritization. “I spend 3 hours a week manually updating this spreadsheet because your product does not sync with our CRM” makes a stronger case than any NPS score.
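
To see why the time framing is so persuasive, here is a back-of-the-envelope calculation a PM might run on that answer. The hourly cost and number of working weeks are assumptions made for the sake of the example, not figures from this guide:

    # Rough annualized cost of a manual workaround.
    # All inputs are illustrative assumptions, not measured values.
    hours_per_week = 3    # from the user's own estimate
    hourly_cost = 75      # assumed fully loaded cost per hour (USD)
    working_weeks = 48    # assumed working weeks per year

    annual_cost = hours_per_week * hourly_cost * working_weeks
    print(f"Workaround costs roughly ${annual_cost:,} per year")  # -> $10,800

A number like that slots directly into a prioritization doc, which is exactly the leverage this question is designed to create.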

Churn and Retention Research (Questions 21-30)


Churn research requires reconstructing the decision timeline. These questions move past the stated reason (“too expensive”) to the actual driver (the product stopped being worth the price because the champion who evangelized it left the company).

21. “When did you first think about canceling? What was happening with the product at that point?”

Anchors the churn timeline to a specific moment and context. The trigger event is almost never the reason listed on the cancellation form. A user who says “too expensive” may have started evaluating costs only after a product outage eroded trust.

22. “Walk me through the last month before you canceled. What changed?”

Reconstructs the decision sequence. Churn is usually a process, not an event. This question surfaces the escalation pattern — from mild annoyance, to active friction, to evaluation of alternatives, to the decision to leave.

23. “What would have needed to happen for you to stay?”

Identifies the retention intervention that would have worked — and when it needed to happen. This directly informs CS playbooks and product retention features. If the answer is “nothing,” the product-market fit was poor for this segment from the start.

24. “Did you talk to anyone at [Company] about these issues before canceling?”

Reveals whether CS had the opportunity to intervene and failed, or never had the opportunity because the user did not signal distress. If most churned customers never contacted support, the early-warning system needs redesign.

25. “What are you using instead? How is it working?”

Surfaces the actual competitive displacement. What users switch to reveals what they value. If they downgrade to a spreadsheet, the product was over-built for their needs. If they switch to a competitor, the competitive gap needs investigation.

26. “If we fixed [the issue they raised], would you come back?”

Tests whether the churn is recoverable. Users who say “probably, yes” represent a winback segment. Users who say “I’ve moved on” reveal that the issue was a symptom, not the cause.

27. “What would you tell a colleague who asked whether they should use [Product]?”

Captures the post-churn narrative the customer will spread. This is reputation intelligence. If churned users would still recommend the product with caveats (“great for small teams but breaks at scale”), the positioning issue is clear.

28. “Was there a specific incident that pushed you from ‘thinking about it’ to actually canceling?”

Identifies the triggering event. Often it is something small — a failed export, a support interaction that felt dismissive, a competitor ad that arrived at the right moment. The trigger reveals the breaking point in already-fragile relationships.

29. “How did pricing factor into your decision?”

Price sensitivity testing through behavioral reconstruction rather than hypothetical questioning. Users who say “price was a factor” almost always mean “the value no longer justified the price.” The follow-up is always: what changed about the value?

30. “What did we do really well? What will you miss?”

Captures retained value perception. Understanding what churned customers valued reveals your product’s actual strengths (vs. what you think your strengths are) and identifies features that create stickiness even when overall satisfaction declines.

Competitive Intelligence (Questions 31-40)


Competitive research surfaces what actually drives purchase and switching decisions. These questions avoid “what do you think of Competitor X?” framing, which produces comparison shopping answers instead of decision insights.

31. “Walk me through the last time you evaluated tools in this space. What did you look at and what drove your final decision?”

Reconstructs the evaluation process chronologically. The sequence matters — which tools were considered first, which were eliminated and why, and what criteria determined the final choice. This reveals both the consideration set and the decision framework.

32. “What does [Product] do that nothing else you’ve tried does as well?”

Identifies your defensible differentiation from the user’s perspective. If users consistently name the same capability, that is your competitive moat. If answers vary widely, differentiation is unclear in the market.

33. “If [Product] disappeared tomorrow, what would you use instead?”

Reveals the closest substitute and the switching cost. If users immediately name a competitor, the switching cost is low and competitive risk is high. If they say “I’d have to cobble together three tools,” the integration value is your retention advantage.

34. “What do you hear from colleagues at other companies about how they solve this problem?”

Surfaces competitive intelligence from word-of-mouth networks. Users often know what competitors are doing through peer conversations. This captures competitive positioning as it actually exists in buyer conversations, not as it appears on feature comparison pages.

35. “When you last saw a demo or ad for an alternative, what caught your attention?”

Identifies which competitor messaging is resonating. If users remember a specific claim (“they said they could cut our reporting time by 50%”), that is a competitive vulnerability in your positioning that needs response.

36. “What is the one thing a competitor would need to offer to make you switch?”

Defines your retention threshold. The answer reveals both your weaknesses and the switching cost users perceive. If the bar is low (“better mobile app”), you are vulnerable. If the bar is high (“they’d need to replicate our entire workflow”), you have strong lock-in.

37. “How do you describe what [Product] does when someone asks?”

Captures your actual market positioning — not what marketing says, but what users tell each other. If users describe a narrow use case (“it’s a survey tool”) when you position as a platform (“end-to-end customer intelligence”), there is a perception gap.

38. “What frustrated you about other tools you tried before [Product]?”

Identifies the competitor weaknesses that drove users to you. These frustrations are your competitive advantages and should be prominent in your win-loss analysis and sales enablement materials.

39. “If you had to cut your tool budget by 30%, which tools would survive and which would go?”

Tests your product’s position in the “essential vs. nice-to-have” hierarchy. Products that survive budget cuts have strong value perception. Products that go first have a perceived-value problem that will surface during any economic tightening.

40. “What integrations do you wish existed between [Product] and your other tools?”

Integration gaps are among the most common competitive vulnerabilities in SaaS. Users who cannot connect your product to their workflow will eventually choose a competitor that connects natively, regardless of feature superiority elsewhere.

Pricing and Packaging Research (Questions 41-50)


Pricing research in SaaS is treacherous because users systematically misrepresent their willingness to pay. These questions use behavioral anchoring and contextual framing to surface actual price sensitivity.

41. “Walk me through how you decided which plan to choose.”

Reconstructs the pricing evaluation process. Reveals which features drove the plan selection, what felt unclear about the pricing page, and whether the chosen plan was the right fit or a compromise.

42. “What do you feel like you’re paying for? What do you feel like you’re paying for but not using?”

Identifies perceived value versus actual usage. Features that users pay for but do not use represent both pricing vulnerability (they’ll downgrade when they notice) and product opportunity (can you activate those features?).

43. “If the price doubled tomorrow, what would you do?”

Tests price elasticity through scenario framing. “I’d cancel immediately” signals high price sensitivity. “I’d be annoyed but I’d stay” signals strong value lock-in. “I’d need to get approval from my boss” reveals the organizational decision layer.

44. “How does the cost of [Product] compare to the cost of the problem it solves?”

Frames pricing against value rather than against competitors. If users perceive the problem as a $100K problem and the product as a $10K solution, you have pricing power. If they perceive the problem as a $5K annoyance and the product as a $10K expense, you have a positioning problem.

45. “Would you rather pay per seat, per usage, or a flat fee? Why?”

Surfaces pricing model preferences with reasoning. The “why” reveals how users think about value — seat-based pricing makes sense for collaborative tools, usage-based for variable workloads, flat-fee for predictability in budgeting.

46. “What other tools in this range do you pay for? How does the value compare?”

Anchors your pricing against the user’s actual spending context. If they pay $50/month for Slack and $200/month for Salesforce, your $100/month pricing sits in a specific mental frame. The comparison reveals where users slot you in their budget hierarchy.

47. “If we offered a 20% discount for annual payment, would that change your plan choice?”

Tests discount sensitivity and commitment willingness. Strong willingness to commit annually signals high retention confidence. Reluctance signals uncertainty about long-term value.

48. “What feature would make you upgrade to the next tier?”

Identifies the expansion trigger — the specific capability that creates upgrade motivation. This informs packaging decisions: the feature users name should be prominent in the next tier’s value proposition.

49. “How did the free trial influence your purchase decision?”

Evaluates trial-to-paid conversion dynamics. Did the trial demonstrate value, or did it run out before the user reached the aha moment? Did the trial limitations feel fair, or did they create frustration that poisoned the purchase decision?

50. “If you were explaining to your finance team why you need this tool, what would you say?”

Captures the internal justification narrative. This is the business case the buyer will actually use. If they cannot articulate it clearly, your product either lacks clear ROI or your positioning does not equip users with the language to justify the purchase.

Expansion and Power User Research (Questions 51-60)


Expansion research identifies what drives users from casual adoption to deep engagement, and what makes champions evangelize your product internally.

51. “How has your use of [Product] changed since you first started?”

Maps the adoption curve. Users who describe expanding use cases are growing accounts. Users who describe narrowing use cases are contraction risks. The specific expansion pattern reveals which features drive deep adoption.

52. “What workarounds have you built around [Product] that we should know about?”

Surfaces the highest-signal feature candidates. Workarounds represent problems users care enough about to invest their own time solving. Each workaround is a feature request backed by behavioral evidence.

53. “Who at your company uses [Product] that you didn’t expect?”

Reveals organic expansion patterns. Unexpected users represent untapped segments. If marketing found value in a product designed for product teams, there is an expansion opportunity your go-to-market has not captured.

54. “What would you need from [Product] to consolidate one of your other tools?”

Identifies platform expansion opportunities. Users who would consolidate tools into yours represent the highest-value expansion — they increase spend while reducing competitive surface area.

55. “How do you train new team members on [Product]?”

Surfaces onboarding friction from the expansion perspective. If training new users is painful, it throttles adoption within the account. Products that spread easily within organizations retain better because switching costs increase with each new user.

56. “What data from [Product] do you export or report on regularly?”

Identifies the outputs users care about most. Reports and exports reveal the metrics and insights that drive decisions. These are your product’s actual value deliverables, which may differ from what you think you are selling.

57. “If your team grew by 50% next year, would [Product] scale with you?”

Tests scalability perception. If users express doubt, they are already considering alternatives for their next growth stage. Addressing scalability concerns preemptively is a retention strategy for growing accounts.

58. “What is the most impressive thing you’ve accomplished using [Product]?”

Captures success stories and champion narratives. These responses become case study material, testimonial content, and proof points. Champions who articulate clear wins are your best retention insurance and expansion advocates.

59. “How would your work be different if you didn’t have [Product]?”

Measures perceived indispensability. Strong answers (“I’d need two more headcount”) indicate deep value lock-in. Weak answers (“I’d figure something out”) indicate shallow adoption that is vulnerable to competitive displacement.

60. “What advice would you give our product team about where to invest next?”

Opens the floor for strategic input. Power users often have insights about market direction, competitive dynamics, and unserved needs that go beyond specific feature requests. Treat this as market intelligence, not just product feedback.

How Do You Use These Questions in AI-Moderated Interviews?


These 60 questions are starting points for AI-moderated conversations that go 5-7 levels deep. The AI moderator uses your selected questions as the discussion guide, then generates contextual follow-ups based on each participant’s responses.

For a typical study (a minimal configuration sketch follows these steps):

  1. Select 8-12 questions from the relevant section based on your research objective
  2. Define the target segment (churned customers, power users, trial abandoners, etc.)
  3. Launch the study — interviews begin within hours and complete in 48-72 hours
  4. Review themes in User Intuition’s Intelligence Hub — patterns across responses, indexed by question and participant segment
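
As a rough illustration of steps 1 and 2, here is a minimal sketch of how a discussion guide for such a study might be laid out. The structure and field names (study, segment, max_probe_depth) are hypothetical placeholders, not User Intuition's actual configuration schema:

    # Hypothetical discussion-guide config for an AI-moderated study.
    # Field names and values are illustrative, not a real platform schema.
    study = {
        "objective": "Understand why trial users fail to activate",
        "segment": "trial abandoners, last 90 days",
        "target_interviews": 50,
        "questions": [
            "Walk me through your first day with the product. What did you do first, and why?",
            "What were you trying to accomplish in that first session?",
            "What almost made you give up during setup?",
            "What did you expect to see on your first login that you didn't find?",
            # ...plus 4-8 more questions from the relevant section above
        ],
        "max_probe_depth": 7,  # AI follow-ups ladder 5-7 levels per topic
    }

The point of writing the guide down this way is discipline: 8-12 primary questions tied to one objective and one segment, with follow-up depth delegated to the moderator.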

A 50-interview study using these questions costs approximately $1,000-$2,500 total (including participant incentives) and delivers in 72 hours. That is the cost of a single team dinner for research that informs a quarter’s worth of product decisions.

The questions do not change. The depth of answers — and the decisions they inform — is what separates research at scale from ad hoc conversations.

Frequently Asked Questions

How many questions should a single interview include?

A focused study needs 8-12 primary questions, each with 2-3 follow-up probes. AI-moderated interviews adapt dynamically, so you design the initial questions and the AI generates contextual follow-ups through 5-7 levels of laddering. Cramming 30 questions into a single interview produces shallow answers across all of them.

How do user research questions differ from usability testing questions?

User research questions explore motivation, context, and decision logic — why users do what they do. Usability testing questions evaluate interface effectiveness — can users complete specific tasks? User research reveals the “why” behind behavior; usability testing reveals the “how.” Both are valuable but answer different questions.

Can you ask about competitors in user research interviews?

Yes, but indirectly. Instead of “What do you think of Competitor X?” ask “Walk me through the last time you evaluated alternatives to solve this problem.” This surfaces which competitors are in the consideration set, what evaluation criteria matter, and where your product falls short — without priming the participant with your competitive assumptions.

How do you avoid leading questions?

Replace hypothesis-confirming questions with open-ended exploration. Instead of “Did you find the onboarding confusing?” ask “Walk me through your first week with the product.” Instead of “Would you pay more for Feature X?” ask “How do you currently handle [the problem Feature X solves]?” AI moderation enforces non-leading question methodology consistently across hundreds of interviews.

Can these questions be used in AI-moderated interviews?

Yes. These questions are designed for AI-moderated interview platforms where the AI uses them as starting points and generates contextual follow-ups based on participant responses. The AI probes 5-7 levels deep on each topic, surfacing motivations and context that scripted question lists miss.

What are the best questions for churn interviews?

The most revealing churn questions reconstruct the decision timeline: “When did you first consider canceling?” followed by “What was happening with the product at that point?” and “What would have needed to change for you to stay?” This surfaces the actual trigger, the evaluation process, and the retention opportunity — not just the rationalized exit reason.

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
