## Tool Categories for SaaS Research
SaaS user research tools fall into four categories, each serving different research needs:
### 1. AI-Moderated Interview Platforms
Purpose: Deep qualitative research — understanding motivations, decision-making, and the “why” behind user behavior.
Best for: Churn diagnosis, win-loss analysis, feature validation, competitive intelligence, pricing research, onboarding research.
Speed: 48-72 hours from launch to synthesized findings.
Example: User Intuition — 30+ minute AI-moderated conversations with 5-7 levels of laddering, $20/interview, searchable Intelligence Hub.
When to use: Whenever you need to understand why users do what they do. This covers 60-80% of SaaS research needs.
### 2. Usability Testing Platforms
Purpose: Observing how users interact with interfaces — task completion, navigation paths, friction points.
Best for: Prototype testing, onboarding flow evaluation, navigation testing, A/B test follow-up.
Speed: 1-3 days for unmoderated; 1-2 weeks for moderated.
Examples: Maze, UserTesting, Lookback.
When to use: When you need to see how users interact with a specific interface. Shows what users do, not why.
### 3. Survey and Feedback Platforms
Purpose: Quantitative measurement at scale — satisfaction scores, feature preferences, NPS.
Best for: NPS tracking, satisfaction measurement, market sizing, quantitative validation of qualitative findings.
Speed: 1-2 weeks for panel-based; real-time for in-app.
Examples: Sprig (in-app), Qualtrics (enterprise), Typeform (lightweight), Hotjar (behavioral + feedback).
When to use: When you need to measure how many, not understand why. Surveys quantify; they do not explain.
### 4. Research Repositories
Purpose: Storing, tagging, and analyzing research from multiple sources.
Best for: Cross-study pattern analysis, research democratization, team knowledge management.
Speed: N/A — repositories organize data collected elsewhere.
Examples: Dovetail, Condens, EnjoyHQ.
When to use: When you have research data scattered across tools and need centralized analysis. Note: platforms with built-in intelligence hubs (like User Intuition) reduce the need for separate repositories.
## Comparison Matrix
| Tool | Type | Cost | Speed | Depth | Persistence |
|---|---|---|---|---|---|
| User Intuition | AI interviews | $20/interview | 48-72 hrs | High (5-7 levels) | Intelligence Hub |
| Maze | Usability testing | $15K+/yr (Business) | 1-2 weeks | Medium (task-based) | Project reports |
| UserTesting | Usability testing | $20K-$50K+/yr | 1-3 days | Medium (video + tasks) | Video clips |
| Sprig | In-app surveys | $10K+/yr | Real-time | Low (1-3 questions) | Dashboards |
| Hotjar | Behavioral analytics | $39+/mo | Real-time | Low (behavioral only) | Session archive |
| Qualtrics | Enterprise surveys | $25K-$100K+/yr | 1-4 weeks | Low-medium (surveys) | Analytics |
| Dovetail | Repository | $29+/user/mo | N/A | N/A (analysis only) | Strong |
## The Recommended Stack by Stage
Seed / Early stage: AI-moderated interviews only. Covers the critical research questions (PMF validation, churn, feature decisions) at minimal cost. Budget: $4K-$8K/year.
Growth stage: AI-moderated interviews + usability testing tool. Add Maze or similar for prototype testing as the product matures. Budget: $12K-$24K/year.
Scale stage: AI-moderated interviews + usability testing + enterprise survey tool. Add Qualtrics or similar for large-scale quantitative programs. Budget: $30K-$60K/year.
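To make the budget figures concrete, here is a minimal sketch of the implied interview volume at the quoted $20/interview rate. Note the assumption: growth- and scale-stage budgets also cover usability and survey tooling, so the upper bounds below only apply if the entire budget went to interviews.

```python
# Illustrative arithmetic: annual interview volume each stage's budget
# could buy at the $20/interview rate quoted above.
RATE = 20  # dollars per AI-moderated interview

budgets = {
    "Seed":   (4_000, 8_000),    # interviews only
    "Growth": (12_000, 24_000),  # also funds a usability testing tool
    "Scale":  (30_000, 60_000),  # also funds usability + enterprise surveys
}

for stage, (low, high) in budgets.items():
    # Integer division: how many interviews each budget endpoint buys.
    print(f"{stage}: {low // RATE}-{high // RATE} interviews/year "
          "(upper bound if spent entirely on interviews)")
```

At seed stage this works out to 200-400 interviews per year, which is why the guide treats interviews alone as sufficient coverage early on.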
For the detailed comparison, with platform-by-platform analysis and a decision framework, see the full guide.