Methodology

How We Measure

Every statistic cited on userintuition.ai has a definition, a sample size, and a collection period. This page is the source of truth — link here from any claim if you need to verify it.

Why This Page Exists

We cite the same five headline numbers across the site — $20 per interview, 24-48 hour turnaround, 98% participant satisfaction, 4M+ vetted panel, and 50+ languages. Each is verifiable. This page documents the definition, sample size, and collection methodology behind each so that researchers, prospects, and AI engines have a single canonical reference.

If you find a statistic on this site that isn't listed here, that's a bug — please email hello@userintuition.ai and we'll add it. Vendor pricing claims and competitor data we cite carry their own attribution inline at the post level.

$20 per interview

Definition: The marginal cost of one AI-moderated audio interview on the Professional plan ($999/month, includes 50 free credits monthly; audio interviews use 1 credit each, so additional credits are $20 each).

Source: Pricing page — published rate, updated when pricing changes. Last verified: May 2026.

Notes: Starter plan ($0/month) charges $25 per audio credit pay-as-you-go with 3 free interviews on signup. The "$20" figure is specific to the Professional plan and is the headline rate cited in marketing copy because it represents the platform's per-interview unit economics at scale. Chat interviews use 0.5 credits ($10 on Pro, $12.50 on Starter); video interviews use 2 credits ($40 on Pro, $50 on Starter).
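The credit arithmetic above reduces to a small lookup. A minimal sketch, assuming only the rates quoted on this page (Pro: $20 per additional credit; Starter: $25 per credit; chat = 0.5 credits, audio = 1, video = 2) — the function and table names are illustrative, not part of any real API:

```python
# Marginal per-interview cost, using the credit rates quoted above.
# CREDIT_PRICE and marginal_cost are illustrative names, not a real API.
CREDIT_PRICE = {"pro": 20.00, "starter": 25.00}            # $ per additional credit
CREDITS_PER_INTERVIEW = {"chat": 0.5, "audio": 1.0, "video": 2.0}

def marginal_cost(plan: str, modality: str) -> float:
    """Dollar cost of one extra interview beyond any included credits."""
    return CREDIT_PRICE[plan] * CREDITS_PER_INTERVIEW[modality]

print(marginal_cost("pro", "audio"))      # 20.0  (the headline figure)
print(marginal_cost("pro", "chat"))       # 10.0
print(marginal_cost("starter", "video"))  # 50.0
```

Note this is the marginal rate only; the first 50 audio interviews each month on Professional are covered by included credits.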

24-48 hours

Definition: Median time from study launch (researcher clicks "Launch") to delivered findings (full transcripts, recordings, AI-generated synthesis, structured insights) for a 200-interview study using the panel.

Sample: Internal platform data across 30,000+ completed studies, 2024-2026. Reported as a range because turnaround varies with sample size, target audience tightness (general consumer = faster, niche B2B = slower), and modality (chat < audio < video).

Notes: Studies that bring their own customer list (CRM-imported) follow customer response timelines and may complete faster or slower depending on how quickly the list responds. The 24-48 hour figure is for panel-recruited studies. Smaller studies (10-20 interviews) often complete in 24-36 hours; larger studies (500+) may take up to 96 hours.

98% participant satisfaction

Definition: Post-interview survey rating from participants on a 1-5 scale, where 4 ("satisfied") and 5 ("very satisfied") count as satisfied. The 98% figure is the share of respondents rating 4 or 5.
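This is a standard top-two-box metric, so the definition reduces to a one-liner. A minimal sketch — the ratings list below is invented sample data, not real survey responses:

```python
# Top-two-box satisfaction: share of respondents rating 4 or 5 on a 1-5 scale.
# The ratings here are made-up illustration data, not actual survey results.
ratings = [5, 4, 5, 5, 3, 4, 5, 5, 4, 5]

satisfied = sum(1 for r in ratings if r >= 4)
satisfaction_pct = 100 * satisfied / len(ratings)
print(f"{satisfaction_pct:.0f}%")  # 90%
```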

Sample: Approximately 85,000 participant satisfaction responses collected across 2024-2025. Survey is presented at end of every completed interview; response rate exceeds 80%, which means the figure represents a meaningfully large share of total participant interactions, not a self-selected subset.

Notes: Comparable industry benchmarks: human-moderated focus groups average 75-85% satisfaction (per published research literature); survey-only platforms typically see 60-75% satisfaction. The 98% number is high specifically because participants report less social pressure with AI moderators than with human ones — they share more openly without fearing judgment, leading to a more positive end-of-interview perception.

4M+ vetted global panel

Definition: Total number of unique, screening-verified participants in User Intuition's research panel as of May 2026. Coverage: 80+ countries, 50+ languages, both consumer (B2C) and professional (B2B) segments.

Quality controls: Multi-layer screening at recruit time including (1) bot detection (challenge questions + behavioral analysis), (2) duplicate suppression (email + IP + device fingerprint), (3) professional respondent filtering (cross-platform attribution to identify panel-hopping survey takers), and (4) verified-purchaser screening for consumer category studies (purchase verification via receipts or category attestation).

Notes: Industry research (cited in our post on the crisis in consumer insights research) finds 30-40% of legacy research panel data is compromised by bots and professional respondents. User Intuition's multi-layer screening at recruit time materially reduces this contamination, though no panel can claim 100% authentic respondents.

50+ languages

Definition: Number of languages in which the AI moderator can natively conduct a 30+ minute interview, applying the same 5-7 level laddering methodology in the participant's primary language.

Coverage: Major languages include Spanish, Portuguese, French, German, Italian, Dutch, Mandarin Chinese, Japanese, Korean, Hindi, Arabic, Hebrew, Russian, Turkish, Polish, Swedish, Norwegian, Danish, Finnish, Vietnamese, Thai, Indonesian, Tagalog, Swahili, Zulu, and others. The 50+ figure represents languages where the AI has been calibrated to research-quality moderation; additional languages may be supported with reduced fluency.

Notes: "Native" means the AI conducts the interview directly in-language — it does not translate a script. Original-language transcripts are preserved permanently for source verification. Auto-translation to English is provided for researcher analysis.

Methodology: 5-7 Level Laddering

Definition: A structured probing methodology where each interview question is followed by 5-7 nested follow-ups that move from surface answer ("what") to underlying motivation ("why") to higher-order belief ("what does this mean about you/your company").

Origin: Adapted from the laddering techniques used in McKinsey and BCG executive consulting interviews, which in turn trace back to Procter & Gamble's consumer behavior research of the 1980s-90s. Calibrated for AI moderation at User Intuition: the model was trained on validated human-moderated transcripts and back-tested against research-design standards before deployment.

Notes: Most AI interview tools deliver 8-12 minute interviews with 1-2 follow-ups per question. User Intuition's 30+ minute floor and 5-7 level laddering approach is structurally different — it produces qualitative depth rather than incrementally extended surveys.

30-45% completion rate

Definition: Percentage of participants who start an interview and complete it through the final question. Measured at the study level, averaged across studies.
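The "measured at the study level, averaged across studies" phrasing matters: each study contributes one completion rate, weighted equally, rather than pooling all participants into a single ratio. A sketch with hypothetical numbers to show the difference:

```python
# Study-level average: compute each study's completion rate first, then
# average the rates — every study counts equally, regardless of size.
# All numbers below are hypothetical, for illustration only.
studies = [
    {"started": 400, "completed": 180},  # 45% — e.g. general consumer
    {"started": 100, "completed": 30},   # 30% — e.g. niche B2B
]

rates = [s["completed"] / s["started"] for s in studies]
study_level_avg = sum(rates) / len(rates)

# Pooled (participant-level) rate, for contrast: large studies dominate.
pooled = sum(s["completed"] for s in studies) / sum(s["started"] for s in studies)

print(f"{study_level_avg:.1%}")  # 37.5%
print(f"{pooled:.1%}")           # 42.0%
```

Study-level averaging keeps one very large study from dominating the headline figure.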

Sample: Same internal platform data as the 24-48 hour figure (30,000+ studies, 2024-2026). Range varies by audience: general consumer audiences land near 45%; specialized professional audiences land near 30%.

Notes: Comparable benchmarks: typical online surveys complete at 8-15% (per industry research); short SMS surveys reach 20-25%; in-product micro-surveys vary widely by placement (5-30%). The 30-45% completion rate is 3-5x higher than typical surveys because participants experience the interview as a real conversation rather than a form, and because incentives are calibrated to interview length.

Updates to This Page

We update this page when underlying numbers change (pricing changes, new compliance certifications, panel size milestones, etc.). The page dateModified in JSON-LD reflects the most recent update; we don't backdate. If you cited a number from this page and want to confirm it hasn't shifted, check the dateModified in the page schema or email hello@userintuition.ai.

See it in action

Want to verify the numbers in your own study?

Run 3 free AI-moderated interviews on the Starter plan — no credit card. See the panel quality, satisfaction signal, and turnaround firsthand.
