
UX Research Plan Template: A Framework for Product and Design Teams

By Kevin, Founder & CEO

Most UX research plans exist as a Notion document with a title, a vague research goal, and a list of questions someone brainstormed in thirty minutes. They sit in the project folder. They help nobody.

The problem is not that teams don’t plan. It’s that they plan the wrong things. They specify what to study without specifying how to recruit the right participants, how to structure an interview guide that surfaces motivation rather than opinion, how to analyze transcripts systematically, or how to translate findings into decisions a product team will actually act on.

This template is a working framework. Each section maps to a specific failure mode in typical UX research execution — the research question that produces unusable data, the screener that fills a study with the wrong participants, the analysis that stays in a spreadsheet, the findings deck nobody reads. Work through it in order. A complete UX research plan built from this template will take you from question to insight in one sprint cycle, not six weeks.

Part 1: Research Question Scoping

The research question is the most important element in any UX research plan. It determines every downstream decision: which method you use, who you recruit, what you ask, and what counts as a successful study. Most teams treat it as a formality. It is not.

The research question template:

“We want to understand [specific behavior or motivation] among [participant profile] so that we can make a decision about [specific product or design decision].”

The three components matter individually. “Specific behavior or motivation” keeps the study focused on what users actually do and why — not what they think or feel abstractly. “Participant profile” forces you to define who can actually answer this question before you recruit. “Specific product or design decision” is the test: if there is no decision attached to the research, the study has no reason to exist.

What bad research questions look like:

“What do users think of our checkout?” This is too broad and has no decision attached. Users can think many things about checkout. What are you doing with their thoughts? Are you redesigning the flow, removing a step, testing a new payment option? Without a decision anchor, every finding is equally valid, and none of them is actionable.

“Is our onboarding confusing?” This is a leading question with yes/no framing. It also focuses on a product judgment rather than user behavior. When users say “yes, the onboarding is confusing,” you still know nothing about where the friction is, why it happens, or what they were trying to accomplish when they encountered it.

“Do users prefer the new dashboard layout?” This is a preference question that produces preference data — which tells you what users say they like, not what they will actually use or whether it solves a real problem. Preference research has its place. It is not UX research in the behavioral sense.

What good research questions look like:

“What prevents new users from completing account setup in their first session, and what reasoning do users who succeed use to push through friction points?” This question is specific, focused on behavior and motivation, and implies a clear decision: improve first-session setup completion.

“What triggers the moment a user decides to upgrade versus stay on the free plan, and what was happening in their work or workflow immediately before that decision?” This question targets a specific conversion behavior, identifies a participant profile (users who have upgraded or actively decided not to), and maps directly to pricing and upgrade flow decisions.

“When enterprise buyers evaluate [product category] vendors, what evidence do they rely on to distinguish between options that look similar on the surface?” This question targets a specific stage of the buyer journey, defines the participant (enterprise evaluators), and informs positioning and sales enablement decisions.

Research question checklist before moving forward:

  • Is the question specific enough that you will know what a successful study looks like?
  • Is it focused on behavior or motivation, not opinion or preference?
  • Is it connected to a specific product, design, or positioning decision?
  • Is it answerable only through qualitative research — or could analytics or a survey answer it faster?

That last item matters. If your research question is “what percentage of users abandon checkout at the payment step,” analytics answers that. Run analytics first. If analytics shows 65% abandon at payment and your research question becomes “why do users who reach the payment step abandon instead of completing, and what distinguishes completers from abandoners,” now you have a question that justifies qualitative research.

Part 2: Method Selection Decision Tree

Method selection is often treated as a default: teams that do research tend to do the same kind of research repeatedly, regardless of what the question actually requires. This produces studies that generate a lot of data without answering the question that motivated the study.

Work through this decision tree in order. The following matrix summarizes the match between research question type and recommended method:

| Research Question Type | Recommended Method | Depth | Speed | Scale | Typical Cost |
| --- | --- | --- | --- | --- | --- |
| Why users do something (motivation) | AI-moderated interviews | High | 48-72 hrs | 200-300+ | From $200 |
| Whether users can do something (usability) | Moderated/unmoderated usability test | High | 1-2 weeks | 5-20 | $5K-$15K |
| How many users feel a certain way | Survey (after qualitative) | Low | Hours-days | 1,000+ | $2K-$5K |
| How behavior changes over time | Diary study | High | Weeks | 10-30 | $10K-$25K |
| Environmental context and workflows | Contextual inquiry | Very high | Weeks | 3-8 | $15K-$30K |

Are you exploring why users do something? Use qualitative interviews — AI-moderated or human-moderated depending on scale and timeline requirements. Qualitative interviews are the right method when you need to understand motivation, decision logic, emotional context, or the sequence of events leading to a behavior. They are the wrong method when you already know the why and need to measure frequency.

Are you testing whether users can do something? Use usability testing — moderated for complex tasks where you need to probe in real time, unmoderated for faster feedback on discrete interaction patterns. Usability testing answers “can users accomplish this task” and “where does the interaction break down.” It does not reliably answer why those breakdowns happen or whether the task itself is the right task to optimize.

Do you need statistical frequency? Use a survey — but run qualitative research first to know what to measure. Surveys are powerful instruments for quantifying what qualitative research has already identified. A survey built without prior qualitative grounding tends to measure the wrong things with high precision. Run six to eight qualitative interviews first. Build the survey around what you find.

Do you need to track behavior change over time? Use a diary study. Diary studies are the right method for understanding behavior in natural context across days or weeks: habit formation, repeated task patterns, usage in different environments. They are resource-intensive and rarely necessary for most product and design questions.

Is in-person context critical to the research question? Contextual inquiry with a human moderator is the right method for studying physical space, environmental constraints, or interactions that cannot be meaningfully reconstructed in conversation. It requires significantly more logistics and cannot be scaled.
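If it helps to keep method selection consistent across teams, the decision tree above can be captured as a simple lookup in internal tooling or a planning checklist. Below is a minimal sketch; the question-type labels and method names are illustrative shorthand, not a fixed taxonomy or a platform API.

```python
# Minimal sketch: the method-selection decision tree as a lookup.
# Question-type labels and method names are illustrative, not a fixed taxonomy.

def recommend_method(question_type: str) -> str:
    """Map a research question type to the recommended method from the matrix above."""
    decision_tree = {
        "why (motivation)": "Qualitative interviews (AI- or human-moderated)",
        "can they (usability)": "Moderated or unmoderated usability test",
        "how many (frequency)": "Survey, grounded in prior qualitative interviews",
        "change over time": "Diary study",
        "physical/environmental context": "Contextual inquiry",
    }
    return decision_tree.get(question_type, "Revisit research question scoping (Part 1)")


if __name__ == "__main__":
    print(recommend_method("why (motivation)"))
```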

For most product and design questions — and for UX research at scale — the answer is AI-moderated qualitative interviews. They are the only method that simultaneously gives you motivational depth, the ability to probe and follow unexpected threads, and the scale to identify patterns across hundreds of participants. A traditional human-moderated study of twenty participants takes four to six weeks and costs $15,000 to $27,000. An AI-moderated study of the same scale completes in 48 to 72 hours at a fraction of the cost, with 30-minute depth interviews and five to seven levels of laddering per response.

When comparing approaches, keep in mind that enterprise platforms like UserTesting are optimized for usability testing at scale. If your research question is about motivation, decision logic, or the why behind behavior rather than task completion, a qualitative interview platform is the right fit. See our complete guide to UX research methods for deeper method comparison.

Part 3: Participant Criteria Template

Participant criteria are where most research plans underinvest. A vague screener fills your study with participants who may be perfectly willing to talk but cannot actually answer your research question. The result is rich, articulate data that is not relevant to the decision you are trying to make.

Screener design: the four elements

Primary criterion is the behavioral qualifier — the one thing a participant must have done recently to be eligible. Define it in behavioral terms, not demographic terms. Not “is a frequent online shopper” but “has purchased from a new-to-them online retailer in the past 30 days.” Not “uses project management software” but “manages a team that actively uses Asana or Jira for sprint planning, and has done so for at least three months.” The behavioral qualifier ensures participants have the lived experience your research question requires.

Secondary criteria add context and specificity. Experience level (new users, established users, power users), platform (mobile vs. desktop if that distinction matters), industry (for B2B), company size (for enterprise research), or frequency of relevant behavior. Keep secondary criteria to two or three. Every additional criterion narrows your pool and extends your recruitment timeline.

Exclusions protect data quality. Always exclude employees of your company and competitors. Exclude participants who have completed more than two paid research studies in the past month — these “professional respondents” know how to perform for research sessions and produce atypically coherent, polished responses that do not reflect typical user behavior. A multi-layer fraud prevention system handles this automatically at scale; for smaller studies, add explicit screener questions.

Segment targets matter when you are studying across user types. If your research question involves comparing behavior across segments — new users versus churned users, B2C versus B2B buyers, mobile-first versus desktop-primary — define your segment splits before recruitment. Decide whether you need equal representation across segments or proportional representation. Commit to that split before you start, because changing it mid-study invalidates comparison analysis.
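Before writing the actual screener questions, it can help to capture these four elements as a structured spec the team signs off on. Below is a minimal sketch, assuming a plain Python dictionary; the field names and example values are hypothetical, not a platform schema.

```python
# Minimal sketch: the four screener elements captured as a structured spec.
# Field names and example values are hypothetical, not a platform schema.

screener_spec = {
    "primary_criterion": (
        "Has purchased from a new-to-them online retailer in the past 30 days"
    ),
    "secondary_criteria": [
        "Shops primarily on mobile",            # platform, if the distinction matters
        "Makes 2+ online purchases per month",  # frequency of the relevant behavior
    ],
    "exclusions": [
        "Employee of our company or a competitor",
        "Completed more than two paid research studies in the past month",
    ],
    "segment_targets": {
        "new users (<90 days)": 25,  # decide the split before recruiting starts
        "established users": 25,
    },
}
```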

Sample sizes by method:

| Study Type | Minimum Sample | Recommended | Saturation Point | Notes |
| --- | --- | --- | --- | --- |
| Qualitative interviews (theme identification) | 8 | 12-15 | ~12 for tightly scoped questions | More participants confirm themes; fewer may miss minority perspectives |
| AI-moderated (pattern distribution) | 20 | 50-100 | 30-50 per segment | 100-300 enables statistical theme distribution across segments |
| Usability testing (interface issues) | 5 | 5-8 per segment | ~8 surfaces 80% of issues | Nielsen’s heuristic holds; larger samples find the same issues more slowly |
| Mixed-method (qual + quant sizing) | 8 qual + 200 survey | 20 qual + 500 survey | Qual informs survey design | Run qualitative first to identify what to measure |

For qualitative interviews focused on theme identification: eight to fifteen participants will surface the primary themes in a focused research question. Saturation (the point at which new interviews stop producing new themes) typically arrives around twelve for tightly scoped questions.

For AI-moderated studies focused on pattern distribution across segments: twenty to fifty participants gives you enough data to report theme frequency with confidence. One hundred to three hundred participants produces statistically meaningful theme distribution — you can say “62% of participants cited payment friction as the primary abandonment driver” with evidence rather than inference.

For usability testing focused on interface issues: five to eight participants per segment reliably surface 80% of usability issues. This is Nielsen’s heuristic and it holds. Running twenty participants through a usability test finds the same issues as running eight, just more slowly.

Sourcing strategy:

Own customers from your CRM are the best source for motivation and behavior research on your own product. They have context, they have usage history, and their responses are grounded in actual experience with your product. CRM uploads directly into a research platform are the most efficient path.

Panel participants provide unbiased signal for competitive research, market landscape studies, or segments you do not have in your own customer base. A vetted global panel with multi-layer fraud prevention — bot detection, duplicate suppression, professional respondent filtering — ensures data quality at scale.

The recommended approach for most UX research studies is blended sourcing: 60% own customers, 40% panel. This gives you depth from users with product context alongside fresh signal from users who may approach the problem differently. Blended studies are particularly valuable for churn research, where you need both current and former customers in the same study.

Part 4: Interview Guide Template

An interview guide is not a questionnaire. A questionnaire is a fixed list of questions asked in a fixed order. An interview guide is a structured framework for conversation — it establishes the terrain you need to cover while leaving room for the moderator (human or AI) to follow threads that emerge in real time.

Opening (5 minutes)

The opening has two functions: building enough rapport that the participant will share candidly, and establishing the context frame so their responses are grounded in real experience rather than hypothetical opinion.

Warm-up: “Tell me a bit about your role and how you typically approach [product category] in your day-to-day work.” This surfaces context before any substantive questions and signals that you are interested in their actual experience, not a performance.

Context setter: “When did you first start using [product type]? What was going on at the time that made it relevant?” This establishes a concrete anchor in the participant’s actual history, which prevents the abstract hypothetical responses that plague poorly structured research sessions.

Do not explain the research purpose in detail at the opening. General context (“we’re doing research to better understand how people approach X”) is appropriate. Detailed framing of your hypotheses or the specific decisions you’re informing contaminates data by priming participants to respond to your framing rather than their own experience.

Core research questions (20-30 minutes)

These six question slots cover the behavioral terrain that produces actionable insight. Adapt the language to your specific context; keep the underlying logic.

Q1 — The behavior anchoring question: “Tell me about the last time you [specific relevant behavior]. When was that, and what were you trying to accomplish?” Anchoring in a specific recent instance grounds the conversation in real experience rather than generalized opinion. “The last time” forces recall of a concrete event. Participants who cannot recall a specific instance often do not have the experience your research requires — a signal worth noting.

Q2 — The context question: “Walk me through what you were trying to accomplish before you got to that point. What was the situation?” Context questions surface the upstream motivations and constraints that drive behavior. Users rarely do things in isolation; understanding the surrounding situation reveals why a behavior occurred rather than just that it occurred.

Q3 — The friction question: “Was there a moment in that process where things went differently than you expected, or where you had to stop and figure something out?” Friction questions uncover the specific points where experience breaks down. “Went differently than you expected” is non-leading — it does not assume friction existed or where it was. Participants who report smooth experiences are equally valuable data.

Q4 — The motivation question: “What made you decide to [key action at a critical decision point]? What was going through your mind?” Motivation questions probe the decision logic at inflection points. Combined with the behavior anchor and context questions, this produces the causal chain: situation → behavior → reasoning → outcome.

Q5 — The comparison question: “Have you tried other ways to accomplish this in the past? How did that compare?” Comparison questions surface the alternatives participants considered and reveal what distinguishes your product or approach from those alternatives in the participant’s actual experience — not in your positioning.

Q6 — The unmet need question: “If you could change one thing about how [task or process] works, what would it be? Not necessarily about our product specifically — just about how this whole thing works.” Widening the frame to the broader task rather than your specific product surfaces latent needs that product-focused questions miss. See our guide to UX research interview questions for a comprehensive bank of probes and follow-up questions organized by research objective.

Probing logic (use throughout)

Probing is the craft element that separates a research conversation from a survey administered out loud. Use these probes when a response is interesting but incomplete:

  • “Can you tell me more about that?” — General probe for elaboration. Use liberally.
  • “What do you mean by [specific word the participant used]?” — Clarifying probe. Critical when participants use evaluative language (“confusing,” “easy,” “frustrating”) without specifics.
  • “What happened next?” — Narrative continuation probe. Use when a sequence is incomplete.
  • “How did that make you feel?” — Emotional register probe. Use sparingly and strategically, not reflexively.
  • “Why was that important to you?” — Laddering probe. This is the five to seven level laddering methodology in action: each “why” answer surfaces a deeper layer of motivation. Use it three to five times in sequence to move from surface behavior to underlying values.
  • “Can you give me a specific example of that?” — Concretizing probe. Use when responses are abstract or hypothetical.
  • “What would have happened if you hadn’t done that?” — Counterfactual probe. Surfaces the stakes and motivations behind a decision.
  • “Is there anything else about that experience you want to make sure I understand?” — Completeness probe. Use at the end of any substantive topic before transitioning.

Closing (5 minutes)

Synthesis question: “We’ve covered a lot of ground. Is there anything else about [topic] that you want to make sure I understand — something we haven’t talked about that feels important to you?” This consistently surfaces material that participants were waiting to share but that didn’t fit naturally into earlier questions.

Permission: “May I follow up with you if I have additional questions after reviewing this conversation?” This keeps the door open for clarification on ambiguous responses without obligating the participant.

Part 5: Analysis Framework

Analysis is the step where most research value is either created or lost. Teams that conduct ten excellent interviews and analyze them poorly produce worse outcomes than teams that conduct six adequate interviews and analyze them rigorously. The analysis framework determines whether your data becomes insight or remains a folder of transcripts.

Step 1: Transcript review before coding

Before applying any coding framework, read three to five transcripts from beginning to end without taking notes or tagging anything. The purpose is calibration: you are developing a feel for the texture of the data, identifying the dominant themes before you formally code them, and catching anything unexpected that your research question did not anticipate. Teams that skip this step and go directly to coding tend to over-apply their pre-planned code frame and miss emergent themes.

Step 2: Initial code frame

Based on your transcript review and research question, create six to ten preliminary theme labels. These are working hypotheses, not fixed categories. A code frame for an onboarding friction study might include: “first-session abandonment triggers,” “help-seeking behavior,” “expectation mismatch,” “peer validation before continuing,” “success milestone clarity,” “return motivation.” AI-assisted analysis auto-generates an initial code frame based on semantic pattern detection across transcripts, which compresses this step from hours to minutes.

Step 3: Code application

Tag transcript segments with your preliminary codes. Critically: allow new codes to emerge. If you encounter a pattern that does not fit your preliminary frame, create a new code rather than forcing the content into an existing category. Forced-fit coding is the most common analysis error — it produces a tidy code distribution that misrepresents what participants actually said.

Apply codes at the segment level (a sentence or a few sentences), not at the response level. Responses often contain multiple themes. Segment-level coding preserves the granularity needed for accurate pattern analysis.

Step 4: Pattern identification

Group related codes into higher-order themes. Count frequency — how many participants surfaced each theme — but do not let frequency alone determine importance. A theme mentioned by four of twenty participants may be more strategically significant than one mentioned by sixteen, if those four represent a high-value segment or if the theme represents a critical failure mode rather than a general preference.

Rare but intense responses are often the most valuable signal in qualitative research. A participant who describes abandoning a product in vivid, specific detail is telling you something that aggregate frequency data cannot.

Step 5: Finding statements

Convert each theme into a finding statement using this structure: “[X percentage or count] of participants reported [specific behavior or experience] because [underlying motivation or reasoning]. Representative quote: ‘[verbatim from transcript].’”

The finding statement has three components. The evidence claim quantifies the pattern. The causal explanation gives it explanatory power. The verbatim quote makes it concrete and credible to stakeholders. Finding statements without verbatim evidence are assertions. Finding statements with verbatim evidence are findings.
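To make Steps 3 through 5 concrete, here is a minimal sketch of how segment-level codes roll up into participant-level theme frequency and a draft finding statement. The data, theme labels, and quote are invented for illustration, and the output is a simplified skeleton of the full finding-statement structure described above, not a prescribed tool.

```python
# Minimal sketch of Steps 3-5: segment-level codes -> participant-level theme
# frequency -> a draft finding statement. All data here is invented for illustration.
from collections import defaultdict

# Each coded segment: (participant_id, theme_code, verbatim_excerpt)
coded_segments = [
    ("p01", "payment_friction", "I didn't trust the card form, so I bailed."),
    ("p01", "expectation_mismatch", "I thought setup would take five minutes."),
    ("p02", "payment_friction", "The payment step kept erroring on my phone."),
    ("p03", "help_seeking", "I searched the help docs twice before giving up."),
]

# Step 4: count how many distinct participants surfaced each theme.
participants_by_theme = defaultdict(set)
for participant, theme, _ in coded_segments:
    participants_by_theme[theme].add(participant)

total_participants = len({p for p, _, _ in coded_segments})

# Step 5: draft a finding statement for the most frequent theme,
# pairing the evidence claim with one representative verbatim quote.
theme, participants = max(participants_by_theme.items(), key=lambda kv: len(kv[1]))
share = len(participants) / total_participants
quote = next(q for _, t, q in coded_segments if t == theme)

print(
    f"{share:.0%} of participants ({len(participants)}/{total_participants}) "
    f"reported '{theme}' as a barrier. "
    f'Representative quote: "{quote}"'
)
```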


Step 6: Decision mapping

For each finding, answer one question: what specific product or design decision does this inform? If you cannot answer that question, either the finding is not actionable (which is possible — not every finding maps to an immediate decision), or your research question was too broad to connect naturally to decisions.

Decision mapping is the accountability step. It converts findings from interesting observations into inputs for a product process. A finding that “43% of new users experience confusion around the pricing page before converting” maps directly to a pricing page redesign decision. A finding that “users feel the product is premium” does not map to a specific decision without additional scoping.

Part 6: Stakeholder Reporting Template

Research that does not get read does not influence decisions. The most common reason research is not read is that it is formatted for the researcher’s process rather than the stakeholder’s consumption pattern. Product managers, designers, and executives read the first two pages and skip the rest. Build your reporting format around that reality.

The 3-slide async summary (for immediate consumption)

Slide 1 — Study overview: research question, method, participant count and profile, date fielded, and three bullet takeaways. The three bullets are not methodology notes — they are the three most important things a decision-maker needs to know right now. Write these for someone who will not look at slides 2 through 4.

Slides 2 through 4 — One finding per slide: a headline finding statement at the top (written as a declarative sentence about what is true, not a question or a vague observation), supporting evidence in two or three bullets, one verbatim quote in a callout box, and a single recommended action. The recommended action is the most important element on the slide. It answers “so what” and connects the research to the decision-making process.

The full report (for deep reference)

Executive summary in one paragraph. Include: the research question, method summary, number of participants, and the two or three most important findings with their implications. Write this paragraph last, after the full report is complete.

Methodology section: research question, method, participant criteria, recruitment approach, fieldwork dates, analysis approach. Keep it to half a page. Stakeholders who need to evaluate the rigor of your methodology will read it; most will not.

Key findings section: each finding formatted as finding statement, supporting evidence, verbatim quotes (two to three per finding), and implications for specific decisions. This is the core of the document and should receive proportionally more investment than the methodology section.

Appendix: full theme distribution table, participant overview by segment, interview guide used, and any additional verbatim quotes that did not fit in the main report but are worth preserving.

What to avoid:

Burying findings in methodology. If your document’s first ten pages are sample design tables and your findings appear on page eleven, you have written a research methods document, not a research insights document. Lead with findings.

Thirty-page PDF decks. These get forwarded once and never opened again. A three-slide async summary with a link to the full report serves stakeholders better in both the short and long run.

Presenting findings without a “so what.” Every finding should have an explicit implication or recommendation attached. Findings without implications are data points, not intelligence.

Starting with limitations. Researchers are trained to lead with caveats. Stakeholders interpret caveats as apologies. State your limitations clearly in the methodology section. Do not open your findings presentation with a list of reasons to discount what follows.

Part 7: Running the Plan Inside a Sprint Cycle

The most common objection to qualitative research in product organizations is timeline. “We don’t have six weeks.” The objection is valid when aimed at traditional human-moderated research. It does not apply to AI-moderated UX research at scale.

48-72 hour sprint-integrated timeline:

| Day | Activity | Time Required | Output |
| --- | --- | --- | --- |
| Monday AM | Finalize research question, configure screener, set up interview guide, launch study | 2-3 hours (experienced) / 4-5 hours (first study) | Live study with participants self-scheduling |
| Monday-Tuesday | Interviews run in parallel (AI-moderated, asynchronous) | 0 hours (automated) | 20-100+ completed 30-minute interviews |
| Wednesday | AI-assisted analysis generates code frame; researcher reviews, refines, writes finding statements | 4-6 hours | Themed findings with verbatim evidence |
| Thursday | Share 3-slide async summary with product and design team | 30-60 minutes | Decision-ready findings in planning discussion |
| Friday | Sprint planning with research on the table; define next week’s research question | Integrated into planning | Research-informed sprint backlog |

Monday morning: finalize your research question using the template in Part 1, configure your screener and participant criteria, set up your interview guide, and launch the study. This takes approximately two to three hours for a researcher who has done it before, four to five hours for a first study.

Monday through Tuesday: interviews run in parallel. AI-moderated sessions operate asynchronously — participants complete their 30-minute interview on their own schedule, with no researcher time required per session. One hundred interviews running simultaneously complete in the same wall-clock time as one interview.

Wednesday: analysis. AI-assisted analysis generates an initial code frame and theme clusters from transcript data. Researcher review, refinement, and finding statement development takes four to six hours for a twenty to fifty participant study.

Thursday: share findings with the product and design team. Present the three-slide async summary. Schedule a working session if decisions need to be made collaboratively.

Friday sprint planning: research findings are on the table when the sprint backlog is being prioritized. The product decision is informed. The research cycle is complete.

For contrast: traditional human-moderated research on the same question — recruiting participants, scheduling sessions across two weeks, conducting sessions one at a time, transcribing, coding manually, writing findings — takes four to six weeks and costs $15,000 to $27,000. The sprint has come and gone three times over by the time the findings arrive.

The timeline argument for skipping research is an argument against the wrong kind of research. The right kind fits inside a single sprint.

Continuous Research: Beyond the One-Off Study

Running a UX research plan is not the same thing as building a research capability. Most organizations treat research as episodic: there is a question, a study is run, findings are presented, the Notion doc is closed. Within ninety days, 90% of those findings have been forgotten — buried in a deck that no one opens anymore, or lost when the researcher who ran the study leaves the team.

The organizations that compound insight over time operate differently. They run research continuously — not one large study per quarter, but a rolling cadence of focused studies that collectively build a permanent, searchable knowledge base. When a new product manager joins and asks “why did we make this decision,” the answer exists and is traceable to specific participant quotes. When competitive dynamics shift and the team wants to understand whether customer priorities have changed, there is a baseline to compare against.

This is what the Customer Intelligence Hub is built for: every conversation compounds into institutional memory. Findings are searchable across studies. Patterns that emerge over months of research are surfaced automatically. The research investment made in Q1 is still paying dividends in Q4, and the team that inherits the product roadmap in Q3 of next year can trace every decision back to the customer voice that informed it.

Continuous research also changes the nature of individual studies. When you have a knowledge base of prior research to draw on, you scope new studies more precisely. You already know the broad landscape; you are filling in a specific gap. Studies get tighter, faster, and more actionable because they are building on a foundation rather than starting from zero.

The shift from episodic to continuous research is not primarily a technology question. It is an operational question: can your research process fit inside a sprint cycle, every sprint, without becoming a bottleneck? AI-moderated research at scale makes that operationally viable for teams that previously had no budget or timeline for qualitative research. A study from $200, completed in 48 hours, conducted in parallel with normal sprint work — the economics change what is possible.

Conclusion

A UX research plan built from this template covers the six failure modes that turn research investments into shelf documents: a vague research question disconnected from any decision, the wrong method for the question being asked, a screener that fills the study with the wrong participants, an interview guide that produces opinion instead of behavior and motivation, analysis that stays in a spreadsheet, and a findings deck formatted for the researcher rather than the decision-maker.

Work through each section before you field a study. The investment is two to three hours. The cost of running a study that produces data you cannot use is measured in weeks and thousands of dollars — plus the organizational cost of research credibility that erodes when findings do not drive decisions.

If you are ready to run a study built on this framework, User Intuition’s UX research platform handles participant sourcing, AI-moderated interviewing at scale, and analysis — so the work you are doing with this plan is the strategic work: scoping the question, designing the guide, and turning findings into decisions.


Frequently Asked Questions

What should a complete UX research plan include?
A complete UX research plan includes: the research question (specific, actionable), method selection rationale, participant criteria and recruitment approach, interview guide with core questions and probing logic, analysis framework, timeline, and stakeholder deliverable format. Most plans are too vague on research questions and too light on analysis structure.

How long should a UX research plan be?
A UX research plan should be as long as it needs to be and no longer. For a focused qualitative study, 2-3 pages covering question, method, participants, guide structure, and timeline is enough. A plan that tries to cover all scenarios becomes a document nobody reads.

What makes a good UX research question?
A good research question is specific, actionable, and focused on behavior or motivation — not on features. Bad: “What do users think of our onboarding?” Better: “What prevents new users from completing setup in their first session, and what motivated those who did complete it?” The better question defines what a successful study looks like.

Should I recruit my own customers or use a research panel?
Start with your own customers for motivation and behavior research — they have context. Use a vetted panel for unbiased perspectives, competitive research, or when you need specific segments you don’t have in your CRM. For most studies, a blend of 60% own customers / 40% panel gives you both depth and fresh signal.

What is a screener in UX research?
A screener is a short questionnaire used to qualify participants before they enter your study. It filters for the right behavioral profile (not just demographics): recency of behavior, experience level, platform usage, relevant context. A poor screener fills your study with participants who can’t answer your core questions.

How long should a UX research interview be?
30-45 minutes is the minimum for qualitative depth research — anything shorter constrains the laddering needed to move from surface behavior to underlying motivation. Traditional human-moderated sessions often run 60-90 minutes. AI-moderated sessions maintain engagement and depth consistently within the 30-45 minute window.

What is thematic coding?
Thematic coding is the process of tagging segments of interview transcripts with descriptive labels (themes), then grouping related codes to identify patterns. Manual thematic coding for 20 interviews takes 20-40 hours. AI-assisted analysis auto-generates initial code frames and surfaces pattern clusters, compressing analysis to hours.

What makes an effective UX research deliverable?
The most effective UX research deliverables lead with findings (not methodology), use verbatim quotes as evidence, connect insights to specific product decisions, and include a clear “so what” for each finding. A 3-slide summary with 3 key findings beats a 30-page report that nobody reads by Tuesday.

How often should teams run UX research?
Research-mature teams run studies continuously — not as one-off projects. The cadence depends on team size and sprint velocity, but a monthly study rhythm (rotating research questions across features, segments, and stages) ensures insights always inform sprint planning without research becoming a bottleneck.

What is the difference between a research brief and a research plan?
A research brief is the stakeholder-facing document stating what you’re studying and why. A research plan is the operational document covering how: method, participants, guide, timeline, analysis approach, and deliverables. Both matter, but teams often write the brief and skip the plan — which is why studies go sideways during fieldwork.

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
