Marketing research interview questions are most useful when organized by campaign phase — pre-launch, in-market, and post-campaign — rather than by generic research methodology. The 53 questions below are designed to surface the gap between what consumers say they will respond to and what actually changes their behavior, their perception, and their purchase decisions. Each question includes guidance on what to probe and what to listen for.
If you are building a research program for a marketing team, you have already discovered that most interview question banks are organized around methodology — awareness questions, perception questions, satisfaction questions. That structure serves researchers. It does not serve marketers who need to know whether a message will land before they spend the media budget.
Why Do Most Message Testing Questions Fail?
The majority of message testing research produces data that teams feel confident about and campaigns that underperform anyway. The problem is not the testing itself. It is the question design.
Three structural failures explain why most marketing research questions produce unreliable signal.
Leading language contaminates the response. When you ask “How compelling is this message?” you have already told the participant that the message is meant to be compelling. You will get a rating that reflects social desirability, not actual resonance. When you ask “How clear is this value proposition?” you have told them it is a value proposition — they will find clarity whether it exists or not. The question format primes the answer format.
Testing messages in isolation strips context. Consumers never encounter your headline in a blank white room. They encounter it while scrolling past seventeen other posts, during a commute, next to a competitor’s banner ad, three seconds after a text from their partner. When message testing removes all of that context, it measures comprehension in a vacuum — and comprehension in a vacuum predicts almost nothing about in-market performance.
Accepting the first answer as the finding. A participant says the message feels “trustworthy.” The moderator codes trustworthiness as a positive signal and moves to the next stimulus. But what does trustworthy mean to this specific person in this specific category? Does it mean the claim is believable, the brand has earned credibility, the design looks professional, or the language avoids hype? Without five to seven levels of probing, “trustworthy” is a placeholder that could mean any of those things — and each one implies a completely different creative strategy.
The stated-versus-actual resonance gap is one of the most documented problems in marketing research. Consumers tell you a message resonates. Then they scroll past the ad without stopping. The questions below are designed to close that gap by probing past stated reactions into the mechanisms that actually drive attention, memory, and action.
How to Use These Questions
Select 8-12 questions per interview. A 30-minute AI-moderated marketing interview cannot explore 53 questions at the depth required. Choose the questions that match your current campaign phase — pre-launch, in-market, or post-campaign — and commit to exploring each one deeply rather than covering more topics superficially.
Spend 60% of interview time on follow-up probes. The numbered questions below are entry points, not endpoints. The follow-up probing — whether conducted by a skilled human moderator or by AI moderation — is where the actual marketing intelligence lives. A single message testing question explored through five to seven levels of laddering will produce more actionable creative insight than ten surface-level reaction questions.
Sequence from broad context to specific evaluation to reflection. Begin with questions about how the participant encounters marketing in the category generally. Then introduce specific messages or campaigns for evaluation. Close with reflection questions that test what stuck after the conversation moved on. This mirrors how marketing actually works: consumers have a context, encounter a message, and either retain it or forget it.
Listen for the language gap. The most valuable signal in marketing research is the difference between the words your team uses to describe the product and the words consumers use to describe what they need. When a participant cannot play back your core message in their own words, you have a comprehension problem. When they play it back perfectly but show no emotional engagement, you have a relevance problem. When they engage emotionally but cannot articulate what to do next, you have a conversion problem. Each diagnosis requires a different creative response.
For a deeper framework on structuring your overall marketing research program, see the complete guide to AI research for marketing teams.
Pre-Launch Message Testing Questions
These questions evaluate whether your messaging communicates what you intend, resonates with the target audience, and differentiates from alternatives — before you commit media budget. Use them to test headlines, value propositions, campaign concepts, and creative direction.
1. “In your own words, what is this message trying to tell you?”
The gap between your intended message and the participant’s unprompted interpretation is the single most diagnostic signal in pre-launch testing. If they cannot articulate your core point without prompting, the message has a comprehension problem that no amount of media spend will fix.
2. “What is the first thing that comes to mind when you read this?”
First reactions reveal the dominant association the message creates. Listen for whether the first association is functional (what the product does), emotional (how it makes them feel), or contextual (a situation it reminds them of). Each pattern tells you which register your message is operating in.
3. “Is there anything in this message that feels hard to believe?”
Credibility is the silent killer of marketing messages. Consumers will not tell you a claim feels exaggerated unless you explicitly invite skepticism. The specific elements they flag as hard to believe are the claims that need proof points, third-party validation, or different framing.
4. “Who do you think this message is for? Is it for someone like you?”
Tests audience targeting accuracy at the message level. When participants say the message is “for someone younger” or “for people who care more about price,” they are telling you that the message does not match their self-concept — which means it will not motivate their behavior regardless of how well they comprehend it.
5. “After reading this, what would you want to know next?”
Reveals the information gap that the message creates. The follow-up question the participant asks is the question your landing page, ad sequence, or sales team needs to answer. If nobody asks a follow-up question, the message may be complete — or it may be forgettable.
Laddering example — Question 1:
Moderator: “In your own words, what is this message trying to tell you?”
Participant: “It’s saying the product is easier to use than what’s out there.”
Moderator: “What does ‘easier to use’ mean to you in this context?”
Participant: “Less setup, I guess. Less hassle getting started.”
Moderator: “What kind of hassle have you experienced with products like this?”
Participant: “The last one I tried took two weeks before I actually got value from it. I almost gave up.”
Moderator: “What kept you from giving up?”
Participant: “My manager had already approved the budget. I would’ve looked bad.”
Moderator: “So when this message says ‘easier,’ what would it need to prove to you?”
Participant: “Show me someone like me who got results in the first week. That would be credible.”
That final answer — “show me someone like me who got results in the first week” — is a specific creative direction that no rating scale would have surfaced. The participant did not just confirm the message was clear; they told you exactly what proof point would make it credible for their situation.
6. “How is this different from what other brands in this category are saying?”
Tests differentiation at the message level. If participants cannot distinguish your message from competitors, your positioning has a distinctiveness problem. Listen for whether they compare on substance (different claim) or on tone (different voice, same claim).
7. “If you saw this message while scrolling on your phone, would you stop? Why or why not?”
Brings real-world context into the evaluation. Forcing participants to consider the competitive attention environment reveals whether the message has stopping power or just comprehension in a controlled setting.
8. “What emotion, if any, does this message create for you?”
Emotional resonance predicts memorability and action more reliably than rational comprehension. If participants report no emotional response, the message is informative but not motivating. Listen for the specific emotion — curiosity, relief, excitement, skepticism — because each one predicts different downstream behavior.
9. “Is there a word or phrase in this message that stands out — positively or negatively?”
Identifies the specific language that carries the most weight. Often, one word is doing all the work, or one word is undermining everything else. This micro-level feedback is impossible to capture with holistic rating scales.
10. “If you were going to describe this product to a friend based only on this message, what would you say?”
Tests message transfer — whether the core idea is portable enough that the participant can repeat it in their own language. If they cannot relay the message to someone else, it will not generate word of mouth, and it will not survive the attention decay curve from exposure to purchase.
Brand Perception and Health Questions
These questions measure how consumers perceive your brand right now, how that perception has shifted over time, and what drives consideration. They should be asked before introducing any specific campaign materials, to establish a clean baseline. For a comprehensive brand research methodology, see the brand health interview questions guide.
11. “Without me naming any brands, which companies come to mind when you think about [category]?”
Unaided awareness is the most predictive measure of brand health. The order in which brands are mentioned, the language used to describe them, and the brands that are forgotten all reveal the competitive mental map.
12. “What words or phrases come to mind when you think about [brand]?”
Open association mapping before any structured questioning. The first three to five words reveal the dominant brand perception — and whether that perception matches your intended positioning.
13. “Has your opinion of [brand] changed in the past year? What caused the shift?”
Captures perception trajectory and identifies the specific events, campaigns, or experiences that moved the needle. A brand that is gaining momentum requires a different strategy than one that is stable or declining.
14. “When you are choosing between options in this category, at what point does [brand] enter your consideration?”
Maps the consideration funnel from the consumer’s perspective. Some brands enter consideration immediately; others only when the preferred option is unavailable or too expensive. The entry point reveals brand strength more accurately than stated preference.
15. “What would [brand] need to do to become your first choice?”
Identifies the specific barrier between consideration and preference. The answer is often surprisingly specific — a product feature, a price threshold, a distribution gap, a messaging change — and actionable in ways that abstract brand scores are not.
16. “Is there anything about [brand] that feels inconsistent with what they say about themselves?”
Surfaces the say-do gap that erodes brand trust over time. When consumers identify inconsistencies between brand messaging and brand behavior, those inconsistencies become vulnerability points that competitors can exploit.
17. “If [brand] disappeared tomorrow, what would you miss? What would you not miss?”
Tests brand indispensability. What consumers would miss reveals the functional and emotional value the brand uniquely provides. What they would not miss reveals the features and messaging that feel interchangeable with competitors.
18. “How does [brand] compare to what you expected before you first tried it?”
The expectation-reality gap drives word of mouth more reliably than satisfaction scores. Brands that exceed expectations build equity organically. Brands that overpromise and underdeliver create detractors who actively warn others.
19. “What kind of person do you picture as a typical [brand] customer?”
Reveals the brand’s perceived audience identity. When the perceived customer does not match the actual target audience, messaging is reaching the wrong mental frame, regardless of media targeting accuracy.
20. “How do you think [brand] compares to [competitor] — not in features, but in what they stand for?”
Elevates competitive comparison from product attributes to brand meaning. This question often reveals positioning gaps — the spaces between what your brand stands for and what competitors stand for where no brand has established ownership.
Campaign Evaluation Questions
These questions are designed for post-campaign research — understanding what consumers noticed, how they interpreted it, what they felt, and what they did as a result. Run these 2-4 weeks after campaign launch, late enough to test retention but early enough that recall is still reliable.
21. “Have you seen or heard any advertising for [category] recently? What do you remember?”
Unaided campaign recall must be captured before you mention your brand or campaign. What consumers remember without prompting is what actually broke through the noise. Everything else was seen but not retained.
22. “When you think about the ads you have seen recently for [category], which one stands out most? Why?”
Forces competitive recall comparison. The ad that stands out most is not always yours — and understanding why a competitor’s creative was more memorable is as valuable as understanding your own campaign’s performance.
23. “I am going to show you this ad. Have you seen it before? Where and when?”
Aided recall with context mapping. The “where and when” follow-up is critical — it tells you which channels are driving actual exposure and whether the campaign is reaching consumers in the intended context.
24. “What do you think this ad is trying to get you to do?”
Tests call-to-action clarity. If participants cannot identify the intended action — visit a website, try a product, change a perception — the campaign has a conversion architecture problem regardless of how well the creative performs on engagement metrics.
Laddering example — Question 24:
Moderator: “What do you think this ad is trying to get you to do?”
Participant: “I think it wants me to check out the product.”
Moderator: “Did you?”
Participant: “No.”
Moderator: “What stopped you?”
Participant: “Honestly, I didn’t know where to go. The ad didn’t really tell me a next step.”
Moderator: “If it had given you a clear next step, what would have been compelling enough for you to take it?”
Participant: “Probably a free trial. I don’t want to commit money to something I’ve only seen in an ad.”
Moderator: “What makes a free trial feel trustworthy versus gimmicky?”
Participant: “No credit card required. If they ask for a card upfront, I assume they’re counting on me forgetting to cancel.”
That exchange moves from campaign recall to conversion barrier to offer design to trust mechanism in five probes. The insight — that the ad lacked a clear, low-commitment next step — is specific enough to inform the next creative iteration.
25. “How did this ad make you feel? Walk me through your reaction.”
Emotional response mapping in the participant’s own language. Listen for whether the dominant emotion is approach-oriented (curiosity, excitement, desire) or avoidance-oriented (skepticism, confusion, annoyance). Approach emotions predict engagement; avoidance emotions predict active avoidance.
26. “Did this ad change how you think about [brand] in any way?”
Measures perception shift attributable to the campaign. If the answer is no, the campaign achieved awareness without attitude change — which may or may not align with the campaign objective.
27. “After seeing this ad, did you do anything — search for the product, talk to someone about it, visit a store or website?”
Tracks behavioral response. The specific actions participants took (or did not take) reveal whether the campaign moved consumers along the funnel or merely created a momentary impression.
28. “What would you change about this ad to make it more relevant to you?”
Consumer-generated creative feedback that identifies the specific elements — imagery, language, offer, context — that would increase personal relevance. The changes participants suggest often reveal unmet needs the creative team did not consider.
29. “If you saw this ad five more times, how would your reaction change?”
Tests creative durability and early fatigue indicators. Messages that consumers say they would tire of quickly need rotation planning. Messages that they say would grow on them have longer shelf life.
30. “What is the one thing you will remember from this ad a week from now?”
Tests message stickiness. The single element that survives in memory is the true takeaway of the campaign — which may or may not be the intended takeaway. If it is not, the campaign has a salience mismatch that post-campaign media optimization cannot fix.
Audience Segmentation Questions
These questions identify the motivations, media habits, and decision triggers that differentiate audience segments. Use them to validate or challenge existing segmentation models with qualitative depth. For a full marketing research framework, see the marketing teams cost guide to understand how continuous segmentation research fits into budget planning.
31. “Walk me through how you typically decide to try something new in this category.”
Decision process mapping by segment. Some consumers are deal-driven, others are recommendation-driven, others are curiosity-driven. The process reveals the trigger that opens the consideration window — which is the moment your marketing needs to reach them.
32. “Where do you go first when you want to learn about a new product or brand?”
Identifies the primary information source by segment. Whether it is Google, TikTok, Reddit, a friend, or a physical store, the answer tells you where your media dollars should go first for this segment.
33. “What makes you trust an ad versus ignore it?”
Trust filters vary dramatically by segment. Some consumers trust data and specificity. Others trust social proof and peer endorsement. Others trust brand history and familiarity. The trust filter determines which creative approach will resonate with each audience.
34. “Describe the last purchase in this category that you felt really good about. What made it feel right?”
Positive purchase narratives reveal the emotional payoff each segment seeks. Some segments want validation that they made the smart choice. Others want the excitement of discovery. Others want the comfort of routine. The emotional payoff is the creative territory your messaging should occupy.
35. “How much research do you do before buying something in this category?”
Research intensity segments consumers into impulse buyers, deliberate researchers, and satisficers who research just enough to feel comfortable. Each segment requires different funnel architecture and different messaging cadence.
36. “What is the most important thing a brand can communicate to earn your first purchase?”
First-purchase drivers differ from repeat-purchase drivers. This question isolates the acquisition message by segment — the single most important claim, proof point, or emotional appeal that converts awareness into trial.
The most effective marketing research programs treat every study as an addition to a compounding intelligence base rather than a standalone project. When teams run continuous research across campaign phases — pre-launch testing, in-market tracking, post-campaign evaluation, and ongoing segmentation — each wave builds on what came before. The first study reveals which messages resonate and why. The second reveals which audience segments they resonate with and which they miss. The third reveals how competitive messaging shifts the landscape around you. By the fourth wave, the team is making campaign decisions with a depth of consumer understanding that no single study could ever produce on its own. This compounding effect is what separates marketing teams that make confident, evidence-based creative decisions from teams that commission annual research reports and then quietly return to guessing for the other eleven months of the year.
37. “Is there a type of marketing in this category that annoys you? What makes it annoying?”
Negative reaction mapping reveals the messaging approaches to avoid for each segment. The annoyance triggers — too aggressive, too vague, too frequent, too irrelevant — define the boundaries of effective communication by audience.
38. “When you see a product recommendation from someone you follow online, how do you react differently than when you see a brand’s own advertising?”
Measures influencer versus brand channel credibility by segment. The answer reveals whether earned media, paid media, or owned media carries the most weight for each audience — and what conditions shift that balance.
Competitive Messaging Questions
These questions explore how consumers perceive competitor messaging, where positioning gaps exist, and what your competition is communicating that you are not. For a framework on concept evaluation against competitive alternatives, see the concept testing questions guide.
39. “Which brand in this category do you think does the best job of explaining what they do? What makes their messaging work?”
Identifies the competitive messaging benchmark from the consumer’s perspective. The brand that communicates most clearly is not always the market leader — but their messaging approach reveals what consumers value in category communication.
40. “Is there a brand in this category whose ads you actively notice? What makes them stand out?”
Tests competitive creative cut-through. The ads consumers notice are the ones winning the attention competition — and understanding what makes them noticeable reveals the creative principles that drive visibility in your category.
41. “Have you seen any messaging from a competitor that made you reconsider your current choice? What did it say?”
Identifies competitive messaging that threatens your customer base. The specific messages that create reconsideration are the competitive claims your marketing needs to neutralize or surpass.
42. “If you were choosing between [brand] and [competitor] and could only read one sentence from each, what would you need that sentence to say?”
Forces articulation of the single most important competitive differentiator from the consumer’s perspective. The sentence they describe is the positioning claim that would resolve the decision — and it is often different from what either brand is currently communicating.
Laddering example — Question 42:
Moderator: “If you were choosing between these two brands and could only read one sentence from each, what would you need that sentence to say?”
Participant: “I’d want to know which one actually works for people like me.”
Moderator: “What does ‘people like me’ mean in this context?”
Participant: “Small business. Limited time. No dedicated team for this.”
Moderator: “And what would ‘actually works’ need to mean?”
Participant: “Results in the first month, not six months of setup. I don’t have the runway to wait.”
Moderator: “If a brand told you they could deliver results in the first month for small businesses, would you believe them?”
Participant: “Only if they showed me a real example. A case study from a business my size, not an enterprise logo wall.”
The participant described the exact proof point structure — a small business case study with first-month results — that would resolve the competitive comparison. That specificity turns competitive research into creative brief material.
43. “What is something no brand in this category is saying that you wish someone would?”
Surfaces the unoccupied positioning territory in the category. The message consumers wish someone would deliver is the market gap your marketing can own — if you say it first and say it credibly.
44. “When a brand in this category makes a claim, what makes you believe it versus dismiss it?”
Reveals the credibility criteria by category. Some categories require data. Others require testimonials. Others require demonstrations. The credibility mechanism determines not just what to say but how to prove it.
45. “If you could take the best parts of two different brands’ messaging and combine them, what would that look like?”
Composite messaging construction reveals the ideal value proposition from the consumer’s perspective. The combination they create shows you which elements from your messaging and your competitors’ messaging carry the most weight.
46. “Is there a brand that used to resonate with you but does not anymore? What changed in their messaging?”
Tracks competitive messaging decay. Brands that lose resonance over time reveal the lifecycle of messaging strategies — and the signals that indicate when a positioning refresh is needed.
Creative Fatigue and Refresh Questions
These questions diagnose when messaging stops landing, what drives wear-out, and what audiences want next. Run these when campaign performance metrics start declining or when you are planning a creative refresh cycle.
47. “Have you noticed any ads in this category that you used to pay attention to but now skip? What changed?”
Tracks the specific fatigue trajectory. Whether the wear-out is frequency-driven (saw it too many times), relevance-driven (my needs changed), or novelty-driven (the format feels stale), each diagnosis points to a different creative response.
48. “Is there a message from [brand] that you feel like you have heard too many times? What is it?”
Identifies the specific message or claim that has reached saturation. Over-repeated messages do not just lose effectiveness — they actively create annoyance that transfers to brand perception.
49. “What would a brand in this category need to say or show you that would feel genuinely new?”
Consumer-defined novelty criteria reveal the creative directions that would recapture attention. Listen for whether consumers want new claims, new formats, new tones, or new proof — each one implies a different creative approach.
50. “When you think about the advertising you see most in this category, what feeling does it create now compared to when you first saw it?”
Emotional decay tracking. The shift from initial interest to current indifference (or annoyance) maps the emotional half-life of the creative approach — information that directly informs media planning and rotation strategy.
51. “If [brand] were going to surprise you with their next campaign, what would that look like?”
Invites consumers to imagine the creative direction they would find compelling. While consumers are not creative directors, the themes they describe — honesty, specificity, humor, empathy — reveal the emotional register the next campaign should target.
52. “Is there a brand outside of this category whose marketing you admire right now? What are they doing well?”
Cross-category creative inspiration from the consumer’s perspective. The brands consumers admire reveal the communication standards they apply to your category — standards that may be higher or more specific than your current creative approach assumes.
53. “What type of content from a brand in this category would you actually choose to spend time with?”
Tests the threshold between interruptive advertising and valuable content. The answer reveals whether the audience wants utility (how-to, comparison, education), entertainment (humor, storytelling), or validation (community, identity reinforcement) — and which content format best delivers it.
Moderator Mistakes That Undermine Marketing Research
Marketing research interviews fail when the moderator introduces the same biases the questions were designed to avoid. These six mistakes are specific to marketing research contexts and disproportionately affect message testing and campaign evaluation studies.
Mistake 1: Showing all creative concepts at once. When participants see three or four concepts simultaneously, they evaluate them comparatively rather than absolutely. The result is a ranking based on relative preference, not an assessment of whether any individual concept would actually stop them from scrolling. Show concepts one at a time, probe each fully, then introduce comparison only at the end.
Mistake 2: Revealing the brand before measuring reaction. The moment a participant knows which brand created the message, every reaction is filtered through their existing brand perception. A message from a trusted brand seems more credible. The same message from an unknown brand seems less convincing. Measure message reaction before brand attribution to isolate the pure creative signal.
Mistake 3: Using marketing jargon in questions. Asking consumers about “value propositions,” “brand positioning,” or “key differentiators” forces them to adopt a framework they do not naturally use. Consumers do not think in marketing language. Questions should use the consumer’s natural vocabulary — what does this tell you, how does this make you feel, what would you do next.
Mistake 4: Accepting metaphors without probing them. When a participant says a message feels “fresh” or “clean” or “powerful,” those words carry different meanings for different people. A moderator who nods and moves on has captured a metaphor, not an insight. Every evaluative word requires at least two levels of follow-up: what does fresh mean to you specifically, and what makes this message feel that way compared to others you have seen.
Mistake 5: Anchoring on the positive. Marketing teams naturally want to hear that their messaging works. Moderators who sense this expectation — or who share it — unconsciously spend more time exploring positive reactions and move quickly past negative ones. The most valuable signal in message testing is the negative reaction that the participant hesitates to voice. Probe the hesitation, not just the enthusiasm.
Mistake 6: Failing to test recall after a delay. Immediate reactions to a message tell you about comprehension and emotional response. They tell you nothing about memorability. The best marketing research interview designs include a recall check later in the conversation — returning to a message shown earlier and asking what the participant remembers. The gap between immediate reaction and delayed recall is the gap between ad liking and ad effectiveness.
How Does AI Moderation Change Marketing Research Interviews?
The core challenge in marketing research is consistency. When a human moderator runs the first interview at 9 a.m., they probe deeply, follow unexpected threads, and capture nuance. By the fifteenth interview at 4 p.m., fatigue has set in — the probes are shallower, the follow-up is less precise, and the moderator unconsciously steers toward the patterns they have already identified.
AI moderation eliminates this degradation entirely. The 200th interview receives the same depth of probing, the same non-leading follow-up methodology, and the same patience as the first. When a participant gives a one-word answer, the AI probes. When they contradict themselves, the AI explores the contradiction. When they show emotion, the AI follows the emotional thread five to seven levels deep. This consistency is particularly critical in message testing, where subtle differences in how a question is asked can shift the response pattern across an entire study.
User Intuition’s platform delivers this consistency at a scale and speed that matches marketing campaign timelines. A pre-launch message test of 200 consumers completes in 48-72 hours at $20 per interview — fast enough to inform the creative brief, affordable enough to test every campaign concept, and consistent enough that the data holds up across segments. The platform works across 50+ languages and draws from a panel of 4M+ participants, which means global marketing teams can test messaging across markets simultaneously rather than sequentially. Eric O., COO at RudderStack, described the methodology’s depth of insight as transforming how his team approaches customer research. And with 98% participant satisfaction, the quality of engagement ensures that each conversation produces signal, not noise.
What to Do With the Responses
The questions in this guide produce qualitative data that is rich, specific, and immediately actionable — but only if the analysis matches the depth of the collection. Do not reduce open-ended responses to percentage-based summaries. Instead, organize findings by the decision they need to inform: which message to run, which audience to target first, which competitive claim to counter, and which creative direction to retire.
Build a structured marketing research template that maps each question category to a campaign decision. Pre-launch message testing findings feed directly into creative briefs. Brand perception data feeds into positioning strategy. Campaign evaluation findings feed into optimization plans and next-campaign planning. Competitive messaging analysis feeds into differentiation strategy.
The teams that extract the most value from marketing research are the ones that close the loop: every study informs a decision, every decision is tracked against outcomes, and every outcome informs the next study. That compounding cycle — research, decide, measure, learn, repeat — is how marketing teams move from occasional research consumers to continuous intelligence operators. The methodology is available. The economics work at $20 per interview. The questions are above. The only remaining variable is whether the research becomes a habit or stays an event.
For a complete view of how AI-moderated research fits into your marketing workflow, explore the brand health tracking solution and the full marketing teams hub.
Frequently Asked Questions
How do you sequence marketing research questions to avoid priming bias?
Start with broad context questions about how participants encounter marketing in the category generally. Then introduce specific messages or campaigns for evaluation without revealing brand attribution. Close with reflection questions that test what stuck after the conversation has moved on. This mirrors how marketing actually works: consumers have context, encounter a message, and either retain it or forget it. Revealing the brand name or campaign purpose too early contaminates every subsequent response.
What is the most important question to ask in pre-launch message testing?
“In your own words, what is this message trying to tell you?” The gap between your intended message and the participant’s unprompted interpretation is the single most diagnostic signal in pre-launch testing. If consumers cannot articulate your core point without prompting, the message has a comprehension problem that no amount of media spend will fix. Follow up with “Is there anything here that is hard to believe?” to surface credibility gaps that kill campaign performance silently.
How do AI-moderated interviews handle follow-up probing on marketing questions?
The AI moderator dynamically adapts follow-up questions based on what each participant actually says, pursuing specific threads five to seven levels deep. When a consumer says a message feels “trustworthy,” the AI probes what trustworthy means specifically, what evidence created that impression, and how it compares to competitor messaging. This consistent probing depth across every interview eliminates the degradation that occurs when human moderators conduct their fifteenth interview of the day.
How many creative concepts should be tested per interview session?
Show participants one concept at a time (monadic design) to avoid order effects and comparative bias. A 30-minute interview can thoroughly evaluate 2-3 concepts with full probing on comprehension, emotional response, believability, distinctiveness, and intended action. Testing more concepts per session reduces probing depth on each one. For larger concept batteries of 4-5 variants, split the sample so each participant evaluates a subset while the full set is covered across the study.