Reference Deep-Dive · 3 min read

How to Design an AI Interview Discussion Guide

By Kevin, Founder & CEO

The discussion guide is the single highest-leverage input in an AI interview study. A good guide unlocks 5-7 levels of probing depth. A bad guide constrains the AI to survey-level responses regardless of the platform’s capability.

This reference guide covers the design principles, structural framework, and templates for discussion guides that get the most out of AI-moderated interviews.

Design Principles

1. Start Behavioral, Not Attitudinal

Wrong: “How do you feel about our onboarding process?”

Right: “Walk me through your first week using the product. What happened?”

Behavioral questions ground the conversation in specific experiences. The AI moderator can then probe the emotional and attitudinal layers — but it needs a concrete foundation to ladder from.

2. Limit Topics to Enable Depth

A 30-minute AI interview supports 3-5 topics with genuine laddering depth. Each topic needs 5-10 minutes for the AI to reach levels 5-7. Attempting to cover 10 topics produces 3-minute exchanges that never get past level 2.
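
The arithmetic is worth checking before fielding a study. As a minimal sketch (the helper below is invented for illustration, not part of any platform), a guide’s time budget can be validated against the interview length:

    # Rough sanity check for a guide's time budget (hypothetical helper).
    # Rule of thumb from above: each topic needs 5-10 minutes to reach levels 5-7.
    MIN_MINUTES_PER_TOPIC = 5

    def fits_interview(topic_minutes, interview_minutes=30):
        """True if every topic gets laddering room and the total fits the slot."""
        return (sum(topic_minutes) <= interview_minutes
                and min(topic_minutes) >= MIN_MINUTES_PER_TOPIC)

    print(fits_interview([10, 8, 7, 5]))  # True: 4 topics fill 30 minutes
    print(fits_interview([3] * 10))       # False: 10 topics never get past level 2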

3. Design Opening Questions as Launching Pads

The AI will generate its own follow-up probes. Your opening questions should create the conditions for exploration (a rough automated screen is sketched after this list):

  • Open-ended — no yes/no answers possible
  • Specific — grounded in a moment, decision, or experience
  • Non-leading — genuinely curious, not confirmatory
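
As a rough illustration, two of these properties (open-ended and non-leading) can be partially screened mechanically. The patterns below are heuristics invented for this sketch, not a real linter, and they don’t replace human review:

    import re

    # Heuristic patterns, invented for illustration only.
    YES_NO_OPENERS = re.compile(
        r"^(do|did|does|are|is|was|were|have|has|would|will|can|could)\b", re.I)
    LEADING_MARKERS = re.compile(r"\b(don't you|wouldn't you|how satisfied)\b", re.I)

    def flag_opening_question(question):
        """Return heuristic warnings for a proposed opening question."""
        warnings = []
        if YES_NO_OPENERS.match(question.strip()):
            warnings.append("starts like a yes/no question")
        if LEADING_MARKERS.search(question):
            warnings.append("sounds leading or confirmatory")
        return warnings

    print(flag_opening_question("Would you consider switching tools?"))
    # ['starts like a yes/no question']
    print(flag_opening_question("Walk me through your first week using the product."))
    # []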

4. Include Permission-Giving Language

Participants give deeper, more honest responses when they feel safe. Include prompts like:

  • “There are no wrong answers — I’m genuinely interested in your experience”
  • “Feel free to share anything, including criticism”

Template: Churn Discussion Guide

Study objective: Understand the decision architecture behind customer churn — not just that they left, but the sequence of events, emotions, and alternatives that drove the decision.

Topic 1: The Decision Journey (10 min)

  • “Walk me through the journey from when you first started considering leaving to when you made the decision.”

Topic 2: Attempted Resolution (8 min)

  • “Before you decided to leave, what did you try to make it work? What happened with those attempts?”

Topic 3: The Alternative (7 min)

  • “What were you hoping would be different about the alternative you chose?”

Topic 4: Retrospective (5 min)

  • “Looking back, was there a point where you felt the relationship was still salvageable?”
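
If you manage guides programmatically, a plain data structure keeps the objective, topics, time budgets, and openers together. The schema below is hypothetical (field names are invented for illustration; this is not User Intuition’s import format) and encodes the churn template above:

    # Hypothetical guide schema; field names are invented for illustration.
    churn_guide = {
        "objective": "Understand the decision architecture behind customer churn",
        "topics": [
            {"name": "The Decision Journey", "minutes": 10,
             "opener": "Walk me through the journey from when you first started "
                       "considering leaving to when you made the decision."},
            {"name": "Attempted Resolution", "minutes": 8,
             "opener": "Before you decided to leave, what did you try to make it "
                       "work? What happened with those attempts?"},
            {"name": "The Alternative", "minutes": 7,
             "opener": "What were you hoping would be different about the "
                       "alternative you chose?"},
            {"name": "Retrospective", "minutes": 5,
             "opener": "Looking back, was there a point where you felt the "
                       "relationship was still salvageable?"},
        ],
        "permission_language": "There are no wrong answers. I'm genuinely "
                               "interested in your experience.",
    }

The win-loss and concept testing templates below fit the same shape.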

Template: Win-Loss Discussion Guide

Topic 1: Trigger and Context (10 min)

  • “Take me back to the beginning — what triggered the search for a solution?”

Topic 2: Evaluation Process (8 min)

  • “Walk me through how you narrowed from many options to your final two or three.”

Topic 3: Decision Factors (7 min)

  • “What was the single most important factor in your final decision, and why did it matter so much?”

Topic 4: Retrospective (5 min)

  • “Looking back, how does the reality compare to what you expected?”

Template: Concept Testing Discussion Guide

Topic 1: Immediate Reaction (8 min)

  • “When you first saw this, what was your gut reaction — before you started thinking analytically?”

Topic 2: Relevance and Fit (8 min)

  • “Who in your organization would this be most useful for, and why them specifically?”

Topic 3: Barriers and Concerns (8 min)

  • “What would need to be true about your situation for you to actually purchase this?”

Topic 4: Competitive Frame (6 min)

  • “What does this remind you of from past experience — positively or negatively?”

Common Mistakes

Overscripting probes. Don’t write “If participant mentions X, ask Y.” The AI handles this dynamically and often pursues better threads than a pre-scripted guide anticipates.

Too many must-hit questions. Mandatory questions reduce the AI’s ability to follow unexpected insights. Limit must-hits to 2-3 per study.

Hypothetical framing. “Would you consider…” produces hypothetical answers. “When was the last time you…” produces experiential data.
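
Taken together, these mistakes are mechanical enough to catch automatically. A minimal sketch, assuming the hypothetical guide schema from the churn template above (lint_guide and the must_hit field are likewise invented for illustration):

    # Hypothetical lint pass over the guide schema sketched earlier.
    def lint_guide(guide):
        issues = []
        topics = guide.get("topics", [])
        if not 3 <= len(topics) <= 5:
            issues.append(f"{len(topics)} topics; aim for 3-5 to preserve depth")
        if sum(1 for t in topics if t.get("must_hit")) > 3:
            issues.append("more than 3 must-hit questions constrains the AI")
        for t in topics:
            if t["opener"].lower().startswith("would you"):
                issues.append(f"{t['name']}: hypothetical framing, ask about a "
                              "specific past experience instead")
        return issues

    print(lint_guide(churn_guide))  # [] for the churn template above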

For the complete methodology behind AI interview depth, see the pillar guide and laddering methodology deep-dive.

Frequently Asked Questions

How does a guide for an AI moderator differ from one written for a human moderator?

AI discussion guides require more precise opening questions because the AI cannot ask spontaneous clarifying questions the way a skilled human moderator might. The opening question needs to surface enough behavioral specificity on its own to give the AI a clear thread to follow. Guides should limit topics to 3-5 at most, since more than that spreads the conversation too thin for the AI to probe deeply on any single area, and each topic should open with a behavioral prompt (what happened) rather than an attitudinal one (how do you feel about it).

What are the most common mistakes in discussion guide design?

The most common mistake is designing for breadth rather than depth: writing 10-15 questions that cover every aspect of the customer experience rather than 3-5 topic anchors with room for the AI to follow the respondent's thread. A second is writing leading questions that suggest the desired answer, which narrows the response space and prevents the AI from surfacing perspectives that contradict the team's assumptions. A third is using attitudinal openings (“How satisfied are you with X?”) that invite ratings rather than behavioral openings that invite stories.

How should guides differ across churn, win-loss, and concept testing studies?

Churn guides should open with the timeline of the customer's decision to leave rather than asking them to evaluate the product directly; this reduces defensiveness and surfaces the actual sequence of events. Win-loss guides need explicit questions about the alternatives evaluated and the evaluation criteria, which don't emerge naturally unless specifically prompted. Concept testing guides require showing or describing the concept before any evaluation questions, then separating believability probes from desirability probes, because the two predict different things and respond to different interventions.

How does User Intuition's AI moderator use the guide?

User Intuition's AI moderator treats the discussion guide as a launching structure rather than a fixed script: it follows the respondent's thread when they surface something significant, probes for specificity when answers are vague, and returns to uncovered guide topics as the conversation allows. The output is a structured transcript with thematic tags mapped to the guide's topic areas, which makes cross-interview synthesis faster because findings are already organized by research objective rather than requiring manual coding of unstructured transcripts.
Get Started

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call.


Prefer to see it first? Explore a real study output, no sales call needed.

No contract · No retainers · Results in 72 hours