
Agency Research Quality Assurance Checklist

By Kevin, Founder & CEO

Quality assurance in AI-moderated research is about ensuring the right questions produce the right depth with the right participants. This checklist standardizes QA across all agency studies.

Stage 1: Pre-Launch QA (Before First Interview)


Discussion Guide Review

  • Every core question is open-ended (no yes/no questions)
  • Laddering probes are designed for 5-7 level depth
  • Questions progress from broad to specific (funnel structure)
  • No leading language (avoid “don’t you think…” or “wouldn’t you agree…”)
  • Time allocation is realistic (6-10 core questions for 30 minutes)
  • Category terminology matches participant language (not industry jargon)
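
If you keep discussion guides as plain text, part of this review can be scripted. The sketch below is illustrative rather than a platform feature: it assumes the guide is a simple list of question strings (a hypothetical format) and flags closed-ended openers, leading phrases, and over-packed guides.

```python
# Illustrative pre-launch lint for a discussion guide. Assumes the guide is
# a plain list of question strings (hypothetical format, not a platform export).

CLOSED_OPENERS = ("do you", "did you", "are you", "is it")
LEADING_PHRASES = ("don't you think", "wouldn't you agree", "isn't it true")

def lint_guide(questions, interview_minutes=30, max_core_questions=10):
    issues = []
    for i, q in enumerate(questions, start=1):
        text = q.lower().strip()
        if text.startswith(CLOSED_OPENERS):
            issues.append(f"Q{i} reads as closed-ended: {q!r}")
        if any(phrase in text for phrase in LEADING_PHRASES):
            issues.append(f"Q{i} uses leading language: {q!r}")
    if len(questions) > max_core_questions:
        issues.append(
            f"{len(questions)} core questions is a lot for {interview_minutes} minutes"
        )
    return issues

guide = [
    "Walk me through the last time you chose a project management tool.",
    "Don't you think onboarding was the hardest part?",
    "Do you use the mobile app?",
]
for issue in lint_guide(guide):
    print(issue)
```

A lint like this does not replace a researcher's read-through; it just catches the mechanical problems before a human reviews tone and flow.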

Participant Screening Review

  • Screening criteria match research objectives
  • Quota targets are specified for key segments
  • Disqualification criteria are clear (professional respondents, non-target demographics)
  • Sample size is sufficient for analytical goals (50+ for pattern identification, 100+ for segment comparison)
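
A quick sanity check of the sampling plan can also be scripted. The sketch below reuses the thresholds from this checklist; the goal names, field names, and quota structure are hypothetical examples, not a platform format.

```python
# Illustrative check of the sampling plan against the thresholds above.
# Goal names and the quota structure are made-up examples.

MIN_SAMPLE = {"pattern_identification": 50, "segment_comparison": 100}

def check_sampling_plan(goal, planned_n, quota_targets):
    warnings = []
    required = MIN_SAMPLE.get(goal)
    if required and planned_n < required:
        warnings.append(f"Planned n={planned_n} is below the {required}+ needed for {goal}")
    if sum(quota_targets.values()) > planned_n:
        warnings.append("Quota targets exceed the planned sample size")
    for segment, target in quota_targets.items():
        if target == 0:
            warnings.append(f"Segment {segment!r} has a zero quota and cannot be compared")
    return warnings

print(check_sampling_plan(
    goal="segment_comparison",
    planned_n=80,
    quota_targets={"current_customers": 40, "churned_customers": 40, "prospects": 0},
))
```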

Study Configuration

  • Interview length matches guide complexity
  • Stimulus materials (concepts, ads, packaging) are uploaded and display correctly
  • White-label branding is configured and verified

Stage 2: Mid-Study QA (After First 10 Interviews)


Depth Assessment

  • Review 5-10 transcripts for laddering depth
  • Confirm 70%+ of interviews reach Level 4+ on primary questions
  • Check for repetitive responses that suggest shallow probing
  • Verify stimulus materials are being presented correctly
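
If laddering depth is tagged per primary question (manually or by your analysis tooling), the 70% target can be checked directly. The data shape below is an assumption for illustration only.

```python
# Illustrative mid-study depth check. Assumes each interview has been tagged
# with the maximum laddering level reached on each primary question.

def depth_pass_rate(interviews, target_level=4):
    """Share of interviews that reached target_level on every primary question."""
    if not interviews:
        return 0.0
    passed = sum(1 for levels in interviews if min(levels.values()) >= target_level)
    return passed / len(interviews)

early_interviews = [
    {"q1_switching_trigger": 5, "q2_decision_criteria": 4},
    {"q1_switching_trigger": 3, "q2_decision_criteria": 4},
    {"q1_switching_trigger": 6, "q2_decision_criteria": 5},
]
rate = depth_pass_rate(early_interviews)
print(f"{rate:.0%} of early interviews reached Level 4+ (target: 70%+)")
```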

Participant Quality Check

  • Average interview duration within expected range (25-35 min for 30-min studies)
  • No duplicate participants
  • Responses demonstrate genuine engagement (not copy-paste or minimal answers)
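
These checks can run as a single sweep over the early transcripts. The record fields below (participant_id, duration_min, answers) are hypothetical; adapt them to however your interviews are exported.

```python
# Illustrative participant-quality sweep over early interviews.
# Field names are assumptions, not a platform export schema.

from collections import Counter

def quality_flags(interviews, min_minutes=25, max_minutes=35, min_words=5):
    flags = []
    ids = Counter(i["participant_id"] for i in interviews)
    for pid, count in ids.items():
        if count > 1:
            flags.append(f"Duplicate participant: {pid}")
    for record in interviews:
        if not (min_minutes <= record["duration_min"] <= max_minutes):
            flags.append(f"{record['participant_id']}: duration {record['duration_min']} min out of range")
        short = [a for a in record["answers"] if len(a.split()) < min_words]
        if len(short) > len(record["answers"]) / 2:
            flags.append(f"{record['participant_id']}: mostly minimal answers")
    return flags

interviews = [
    {"participant_id": "p01", "duration_min": 28,
     "answers": ["I switched because onboarding took two weeks",
                 "Price mattered less than support quality"]},
    {"participant_id": "p02", "duration_min": 12,
     "answers": ["yes", "no", "fine"]},
]
for flag in quality_flags(interviews):
    print(flag)
```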

Decision Gate

  • If depth is insufficient: revise discussion guide before continuing
  • If participant quality is low: review screening criteria
  • If both are satisfactory: continue to full sample
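
The gate itself is simple enough to encode so it is applied the same way on every study. The thresholds below are example values; the 70% depth target comes from the depth assessment above.

```python
# Illustrative encoding of the mid-study decision gate. Thresholds are examples.

def decision_gate(depth_pass_rate, quality_flag_rate,
                  depth_target=0.70, quality_limit=0.20):
    actions = []
    if depth_pass_rate < depth_target:
        actions.append("Revise discussion guide before continuing")
    if quality_flag_rate > quality_limit:
        actions.append("Review screening criteria")
    return actions or ["Continue to full sample"]

print(decision_gate(depth_pass_rate=0.62, quality_flag_rate=0.10))
```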

Stage 3: Post-Study QA (Before Client Delivery)


Findings Validation

  • Every key finding is traceable to specific interview evidence
  • Theme prevalence is quantified (% of participants expressing each theme)
  • Minority patterns are captured (important insights often appear in 10-20% of interviews)
  • No findings contradict the underlying interview data
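
Quantifying theme prevalence is straightforward once interviews are coded with theme labels. The sketch below assumes that coding has already happened; it only computes each theme's share and highlights minority patterns in the 10-20% band.

```python
# Illustrative theme-prevalence summary. Assumes interviews are already coded
# with theme labels; the theme names below are made-up examples.

def theme_prevalence(coded_interviews):
    """Return (theme, share of participants) pairs, most common first."""
    total = len(coded_interviews)
    counts = {}
    for themes in coded_interviews:
        for theme in set(themes):
            counts[theme] = counts.get(theme, 0) + 1
    return sorted(((t, n / total) for t, n in counts.items()),
                  key=lambda pair: pair[1], reverse=True)

coded = [
    ["onboarding_friction", "pricing_confusion"],
    ["onboarding_friction"],
    ["pricing_confusion", "trust_in_support"],
    ["onboarding_friction", "trust_in_support"],
    ["onboarding_friction", "data_export_gap"],
]
for theme, share in theme_prevalence(coded):
    note = "  <- minority pattern, keep it" if 0.10 <= share <= 0.20 else ""
    print(f"{theme}: {share:.0%}{note}")
```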

Deliverable Review

  • Strategic recommendations follow from evidence (no unsupported leaps)
  • Client-facing language replaces platform-specific terminology
  • White-label branding is correct throughout
  • Methodology section accurately describes the study parameters
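
A terminology scrub can catch platform-specific language before a deliverable goes out. The replacement map below is a made-up example; fill it with whatever internal terms your agency wants kept out of client reports.

```python
# Illustrative terminology scrub for client-facing drafts. The replacement
# map is a hypothetical example of internal terms to swap out.

REPLACEMENTS = {
    "laddering probe": "follow-up question",
    "study slug": "study name",
    "respondent pool v2": "participant sample",
}

def scrub_report(text):
    flagged = [term for term in REPLACEMENTS if term in text]
    for internal_term, client_term in REPLACEMENTS.items():
        text = text.replace(internal_term, client_term)
    return text, flagged

draft = "Each laddering probe in respondent pool v2 surfaced switching triggers."
clean, flagged = scrub_report(draft)
print("Replaced:", flagged)
print(clean)
```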

For the full guide to agency research delivery and interview question templates, see our agency resources at User Intuition for agencies.

Frequently Asked Questions

What does pre-launch QA cover?

Pre-launch QA covers discussion guide effectiveness (are questions open-ended and designed for laddering?), screening criteria accuracy (does the screener identify the right participants?), and technical configuration (is the AI moderator calibrated to the study's depth requirements?). Catching guide or screening problems before fieldwork begins is far less costly than correcting them mid-study.

Why run mid-study QA after the first ten interviews?

Mid-study QA after the first ten interviews validates that laddering is achieving the expected depth, that the participant pool matches the target audience profile, and that early thematic patterns are coherent rather than scattered. Issues identified at this stage can be corrected before the majority of fieldwork completes, not after the client deliverable is due.

What does post-study QA check beyond data completeness?

Post-study QA includes verifying thematic consistency across all interviews (not just reviewing outlier transcripts), confirming that all discussion guide sections achieved adequate coverage, and validating that quoted verbatims support the analytical claims made in the report. This stage is where synthesis integrity is checked, not just data completeness.

How does QA for AI-moderated research differ from traditional QA?

Traditional QA focuses on individual moderator performance: did the human moderator probe effectively, follow the guide, and avoid leading questions? AI-moderated QA shifts focus to system-level consistency: is the discussion guide generating productive conversations, are screening criteria producing the right participants, and are thematic patterns emerging with the expected coherence across all interviews rather than a subset?
Put This Research Into Action

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call.

Self-serve: 3 interviews free. No credit card required.

Enterprise: see a real study built live in 30 minutes.

No contract · No retainers · Results in 72 hours