The 12 Rules
1. Interview Your Actual Customers
Generic panel participants do not represent your market. The best research uses your own customer base — paying users, churned accounts, lost prospects. These people have real experience with your product and their motivations directly inform your product decisions.
When your customer base is insufficient, use vetted panels with strict screening criteria: company size, role, purchase involvement, and product category experience.
2. One Question Per Study
Every study should answer one research question. “Why do enterprise customers churn?” is one study. “What features should we build?” is a different study. Combining them produces shallow answers across both topics.
At $200-$1,000 per study, the cost of running separate focused studies is negligible compared to the cost of a single unfocused study that produces ambiguous findings.
3. Minimum 20 Interviews Per Focused Question
The 5-interview trap produces anecdotes, not patterns. Reaching thematic saturation on a single question takes 20-30 interviews at minimum. For segmented research, multiply that minimum by the number of segments you need to compare.
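The 5-interview trap can be sketched as back-of-envelope arithmetic. Under a simplifying assumption — that each interview independently surfaces a given theme with probability equal to that theme's prevalence in your customer base — the chance of missing the theme entirely falls off exponentially with interview count:

```python
def miss_probability(prevalence: float, n_interviews: int) -> float:
    """Probability that a theme held by `prevalence` of the population
    never comes up across n independent interviews (simplified model)."""
    return (1 - prevalence) ** n_interviews

# A pain point held by 20% of customers:
print(round(miss_probability(0.20, 5), 3))   # ~0.328 — a 1-in-3 chance of missing it
print(round(miss_probability(0.20, 20), 3))  # ~0.012 — roughly 1%
```

The model ignores recruiting bias and interview quality, but it illustrates why 5 interviews can miss a theme a third of your market cares about, while 20 almost never will.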
4. Ask Non-Leading Questions
Replace “Did you find the onboarding confusing?” with “Walk me through your first week with the product.” The first question tells the participant what you want to hear. The second surfaces what they actually experienced.
AI moderation enforces non-leading methodology by default — the AI follows a calibrated protocol that does not drift toward leading questions under time pressure.
5. Time Research to Arrive Before Decisions
Research that arrives after the feature ships is documentation, not evidence. Sprint-speed research (48-72 hours) ensures findings inform the decisions they were designed to support.
6. Include Disconfirming Participants
Do not only interview happy customers. Include churned users, lost prospects, and non-users. The most valuable insights often come from people who rejected your product — they reveal weaknesses your satisfied users cannot see.
7. Store Findings in Searchable Systems
Slide decks have a 90-day half-life. Intelligence Hub storage means findings from Q1 are searchable in Q4 — and connect to findings from every other study.
8. Build Continuous Programs, Not One-Off Studies
SaaS markets change monthly. Research from 6 months ago may reflect a market that no longer exists. Continuous research programs detect shifts in real time.
9. Separate Measurement from Understanding
Surveys measure. Interviews understand. Use each for its strength. Do not expect surveys to explain why — and do not expect interviews to produce representative percentages without adequate sample sizes.
10. Pre-Register Your Hypothesis
Before launching the study, document what you believe and what evidence would change your mind. This prevents post-hoc rationalization where any finding can be interpreted as confirmation.
11. Close the Loop
Track whether research findings lead to product changes, and whether those changes improve the target metric. Research without follow-through is overhead. Research with closed-loop measurement is infrastructure.
12. Make Research Accessible to Everyone
Research hoarded by one team or one researcher does not compound. Make findings searchable by product, design, engineering, marketing, and CS. The PM who did not commission the study may benefit most from its findings.
These 12 practices are not aspirational. They are operational requirements for research that consistently influences SaaS product decisions. Teams that follow them build a compounding intelligence advantage. Teams that skip them produce research that gets filed and forgotten.