Quality assurance in AI-moderated research means ensuring the right questions reach the right depth with the right participants. This checklist standardizes QA across all agency studies.
Stage 1: Pre-Launch QA (Before First Interview)
Discussion Guide Review
- Every core question is open-ended (no yes/no questions)
- Laddering probes are designed for 5-7 level depth
- Questions progress from broad to specific (funnel structure)
- No leading language (avoid “don’t you think…” or “wouldn’t you agree…”)
- Time allocation is realistic (6-10 core questions for 30 minutes)
- Category terminology matches participant language (not industry jargon)
Participant Screening Review
- Screening criteria match research objectives
- Quota targets are specified for key segments
- Disqualification criteria are clear (professional respondents, non-target demographics)
- Sample size is sufficient for analytical goals (50+ for pattern identification, 100+ for segment comparison)
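Quota checks like the ones above are easy to automate. A minimal sketch, assuming each participant record carries a `segment` label (the field name, segment labels, and targets here are illustrative, not platform-specific):

```python
from collections import Counter

def check_quotas(participants, targets):
    """Return the shortfall for each segment still under its quota target."""
    counts = Counter(p["segment"] for p in participants)
    return {seg: target - counts.get(seg, 0)
            for seg, target in targets.items()
            if counts.get(seg, 0) < target}

# Illustrative data: 40 heavy users and 15 lapsed users recruited so far.
participants = [{"segment": "heavy_user"}] * 40 + [{"segment": "lapsed"}] * 15
targets = {"heavy_user": 50, "lapsed": 25}
print(check_quotas(participants, targets))  # shortfall per under-filled segment
```

Running this before launch (against the recruitment plan) and again mid-study catches quota drift before it compromises segment comparisons.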
Study Configuration
- Interview length matches guide complexity
- Stimulus materials (concepts, ads, packaging) are uploaded and display correctly
- White-label branding is configured and verified
Stage 2: Mid-Study QA (After First 10 Interviews)
Depth Assessment
- Review 5-10 transcripts for laddering depth
- Confirm 70%+ of interviews reach Level 4+ on primary questions
- Check for repetitive responses that suggest shallow probing
- Verify stimulus materials are being presented correctly
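The 70% depth gate above can be computed directly once transcripts are annotated with the deepest laddering level reached. A minimal sketch, assuming each transcript record carries a hypothetical `max_ladder_level` field (the field name is an assumption):

```python
def depth_pass_rate(transcripts, threshold=4):
    """Fraction of interviews whose primary question reached `threshold`+ levels."""
    reached = [t for t in transcripts if t["max_ladder_level"] >= threshold]
    return len(reached) / len(transcripts)

# Illustrative first-10 batch: deepest ladder level per interview.
sample = [{"max_ladder_level": d} for d in (5, 4, 3, 6, 4, 2, 5, 4, 5, 4)]
rate = depth_pass_rate(sample)
print(f"{rate:.0%} reached Level 4+")  # flag the study if below 70%
```

Anything under the 70% bar at this gate points back to the discussion guide, not to the participants.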
Participant Quality Check
- Average interview duration within expected range (25-35 min for 30-min studies)
- No duplicate participants
- Responses demonstrate genuine engagement (not copy-paste or minimal answers)
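The duration and duplicate checks above can be run as one screening pass. A minimal sketch, assuming interviews are keyed on an email field for deduplication (the key choice and field names are assumptions; engagement checks would need transcript-level review):

```python
def quality_flags(interviews, min_min=25, max_min=35):
    """Flag interviews with out-of-range durations or repeat participants."""
    seen, flags = set(), []
    for iv in interviews:
        if not (min_min <= iv["duration_min"] <= max_min):
            flags.append((iv["id"], "duration out of range"))
        if iv["email"] in seen:
            flags.append((iv["id"], "duplicate participant"))
        seen.add(iv["email"])
    return flags

# Illustrative batch: one too-short interview, one repeat participant.
interviews = [
    {"id": "a1", "email": "p1@example.com", "duration_min": 29},
    {"id": "a2", "email": "p2@example.com", "duration_min": 12},
    {"id": "a3", "email": "p1@example.com", "duration_min": 31},
]
print(quality_flags(interviews))
```

Flagged interviews are candidates for exclusion and, if frequent, a signal to revisit the screener.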
Decision Gate
- If depth is insufficient: revise discussion guide before continuing
- If participant quality is low: review screening criteria
- If both are satisfactory: continue to full sample
Stage 3: Post-Study QA (Before Client Delivery)
Findings Validation
- Every key finding is traceable to specific interview evidence
- Theme prevalence is quantified (% of participants expressing each theme)
- Minority patterns are captured (important insights often appear in 10-20% of interviews)
- No findings contradict the underlying interview data
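Theme prevalence, including the 10-20% minority band, falls out of a simple tally once interviews are coded. A minimal sketch, assuming each interview carries a list of coded theme labels (the `themes` field and the theme names are illustrative):

```python
from collections import Counter

def theme_prevalence(coded_interviews):
    """Map each theme to the % of participants who expressed it at least once."""
    n = len(coded_interviews)
    counts = Counter(theme for iv in coded_interviews
                     for theme in set(iv["themes"]))  # count each participant once
    return {theme: round(100 * c / n) for theme, c in counts.items()}

# Illustrative coded sample of four interviews.
coded = [
    {"themes": ["price", "trust"]},
    {"themes": ["price"]},
    {"themes": ["trust", "convenience"]},
    {"themes": ["price", "convenience"]},
]
print(theme_prevalence(coded))
# Themes landing in the 10-20% band still warrant reporting as minority patterns.
```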
Deliverable Review
- Strategic recommendations follow from evidence (no unsupported leaps)
- Client-facing language replaces platform-specific terminology
- White-label branding is correct throughout
- Methodology section accurately describes the study parameters
For the full guide on agency research delivery and interview question templates, see our agency resources. Visit User Intuition for agencies.