
SaaS User Research Best Practices: 12 Rules for Product Teams

By Kevin, Founder & CEO

The 12 Rules


1. Interview Your Actual Customers

Generic panel participants do not represent your market. The best research draws on your own customer base — paying users, churned accounts, lost prospects. These people have real experience with your product, and their motivations directly inform your product decisions.

When your customer base is insufficient, use vetted panels with strict screening criteria: company size, role, purchase involvement, and product category experience.

2. One Question Per Study

Every study should answer one research question. “Why do enterprise customers churn?” is one study. “What features should we build?” is a different study. Combining them produces shallow answers across both topics.

At $200-$1,000 per study, the cost of running separate focused studies is negligible compared to the cost of a single unfocused study that produces ambiguous findings.

3. Minimum 20 Interviews Per Focused Question

The 5-interview trap produces anecdotes, not patterns. For thematic saturation on a single question, 20-30 interviews are the minimum. For segmented research, multiply that floor by the number of segments you need to compare.
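The sizing rule above reduces to simple arithmetic. A minimal sketch, assuming a 20-interview floor per focused question (the function name and defaults are illustrative, not a prescribed formula):

```python
def required_interviews(num_segments: int, per_segment: int = 20) -> int:
    """Minimum interview count for one focused research question.

    Applies the rule of thumb: 20-30 interviews per focused question,
    multiplied by the number of segments you need to compare.
    """
    if num_segments < 1:
        raise ValueError("need at least one segment")
    return num_segments * per_segment

# One homogeneous user group: 20 interviews minimum.
# Comparing SMB vs. mid-market vs. enterprise: 3 segments, so 60.
```

The point of writing it down is budgeting: a three-segment churn study is a 60-interview commitment, not a 20-interview one.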

4. Ask Non-Leading Questions

Replace “Did you find the onboarding confusing?” with “Walk me through your first week with the product.” The first question tells the participant what you want to hear. The second surfaces what they actually experienced.

AI moderation enforces non-leading methodology by default — the AI follows a calibrated protocol that does not drift toward leading questions under time pressure.

5. Time Research to Arrive Before Decisions

Research that arrives after the feature ships is documentation, not evidence. Sprint-speed research (48-72 hours) ensures findings inform the decisions they were designed to support.

6. Include Disconfirming Participants

Do not only interview happy customers. Include churned users, lost prospects, and non-users. The most valuable insights often come from people who rejected your product — they reveal weaknesses your satisfied users cannot see.

7. Store Findings in Searchable Systems

Slide decks have a 90-day half-life. Intelligence Hub storage means findings from Q1 are searchable in Q4 — and connect to findings from every other study.

8. Build Continuous Programs, Not One-Off Studies

SaaS markets change monthly. Research from 6 months ago may reflect a market that no longer exists. Continuous research programs detect shifts in real time.

9. Separate Measurement from Understanding

Surveys measure. Interviews understand. Use each for its strength. Do not expect surveys to explain why — and do not expect interviews to produce representative percentages without adequate sample sizes.

10. Pre-Register Your Hypothesis

Before launching the study, document what you believe and what evidence would change your mind. This prevents post-hoc rationalization where any finding can be interpreted as confirmation.
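One lightweight way to make pre-registration concrete is to capture the hypothesis and its disconfirming evidence as an immutable record before the study launches. A sketch, with illustrative field names rather than a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)  # frozen: the record cannot be edited after launch
class PreRegistration:
    research_question: str
    hypothesis: str
    disconfirming_evidence: str  # what finding would change our mind
    registered_on: date = field(default_factory=date.today)

# Hypothetical example for the churn study mentioned earlier.
prereg = PreRegistration(
    research_question="Why do enterprise customers churn?",
    hypothesis="Churn is driven by missing SSO support.",
    disconfirming_evidence="Fewer than 5 of 20 churned accounts mention SSO.",
)
```

Freezing the record matters: if the hypothesis can be quietly edited after the findings arrive, any result can be framed as confirmation.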

11. Close the Loop

Track whether research findings lead to product changes, and whether those changes improve the target metric. Research without follow-through is overhead. Research with closed-loop measurement is infrastructure.

12. Make Research Accessible to Everyone

Research hoarded by one team or one researcher does not compound. Make findings searchable by product, design, engineering, marketing, and CS. The PM who did not commission the study may benefit most from its findings.

These 12 practices are not aspirational. They are operational requirements for research that consistently influences SaaS product decisions. Teams that follow them build a compounding intelligence advantage. Teams that skip them produce research that gets filed and forgotten.

Frequently Asked Questions

How many interviews are enough?

For exploratory qualitative work, 10-20 interviews typically produce reliable thematic patterns for a homogeneous user group. For studies that require segment-level comparisons — by role, company size, or use case — you need enough interviews within each segment to see patterns, usually 8-12 per group. Running fewer than 5 interviews in a segment is rarely sufficient to distinguish signal from individual variation.

Where does confirmation bias enter the research process?

Confirmation bias enters research at three points: participant selection (recruiting advocates rather than a representative sample), question design (leading questions that prime users toward expected answers), and synthesis (cherry-picking quotes that support the team's existing hypothesis). The most effective countermeasures are recruiting participants through an independent panel rather than your own CRM, using open-ended behavioral questions rather than direct opinion questions, and having someone outside the immediate product team participate in or review the synthesis.

When in the planning cycle should research happen?

Research that lands after a feature is already scoped rarely changes decisions — it becomes a post-hoc justification exercise rather than a genuine input. Effective timing means starting research early enough in the planning cycle that findings can reshape priorities, not just validate them. Concretely, this means initiating discovery research at the beginning of a planning quarter, not after roadmap commitments are made.

Why do teams cut corners, and how does User Intuition help?

The most common reason SaaS teams shortcut research best practices — smaller sample sizes, internal participant recruiting, rushed synthesis — is time pressure. User Intuition removes the bottleneck by conducting AI-moderated interviews at scale in 48-72 hours, drawing from a 4M+ panel so recruiting bias is minimized. This makes it operationally feasible to run properly sized studies within sprint timelines rather than cutting corners to ship on schedule.
Get Started

Put This Research Into Action

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call. Or explore a real study output first — no sales call needed.

No contract · No retainers · Results in 72 hours