
CPG Consumer Research Interview Questions by Objective

By Kevin, Founder & CEO

Effective CPG consumer research starts with asking the right questions — and the right questions depend entirely on what you are trying to learn. A concept testing interview requires fundamentally different questions than a brand switching study or a packaging validation session. Using generic questions across all CPG research objectives produces generic insights.

This guide provides 75 field-tested interview questions organized by the seven most common CPG research objectives: concept testing, brand health tracking, packaging and design validation, consumer segmentation, claims testing and validation, product innovation, and brand switching research. Each question includes the laddering principle that makes it effective and the follow-up probes that turn surface responses into motivation hierarchies.

These questions work in both human-moderated and AI-moderated interview settings. In AI-moderated interviews, the follow-up probes happen automatically through 5-7 levels of laddering — the moderator adapts to each participant’s responses and probes deeper where the most important insights surface.

How Do You Use This Guide?


Do not use all 75 questions in a single study. Select 8-12 primary questions that map to your specific research objective, and trust the follow-up probes to generate depth. A 30-minute AI-moderated interview with 10 well-chosen questions and deep probing produces dramatically more actionable insight than a 30-minute interview racing through 25 questions at the surface level.

For each question below, you will find:

  • The question — ready to use in a discussion guide
  • Why it works — the research principle behind it
  • Follow-up probes — how to go deeper

For the concept testing questions, we also include what to listen for — the signals that indicate you have found something actionable.

Concept Testing Questions (1-12)


Use these when evaluating new product concepts, line extensions, or reformulations with target consumers. The goal is to understand not just whether consumers like a concept, but why — and whether that “why” connects to motivations strong enough to drive purchase behavior. (For the full concept testing methodology, see our CPG concept testing guide.)

1. “Tell me your initial reaction to this concept in your own words.”

Why it works: Captures the unstructured, unfiltered first impression before any framing questions anchor the participant’s thinking. The language consumers use spontaneously reveals what stands out and what they mentally categorize the product as.

Follow-up probes:

  • “What was the very first thing you noticed?”
  • “What does this remind you of?”
  • “If you had to describe this to a friend, what would you say?”

What to listen for: Whether the participant frames the concept in your intended category or a different one. If your premium granola bar concept gets categorized as “another protein bar,” your positioning has a problem that no amount of marketing will fix.

2. “Walk me through a typical week of buying [category]. Where, when, and how do you decide what to buy?”

Why it works: Establishes the purchase context before evaluating the concept. This grounds the conversation in real behavior rather than hypothetical preferences. A consumer who buys snack bars at a gas station on the way to work evaluates concepts differently than one who buys them during a weekly Costco run.

Follow-up probes:

  • “What typically triggers you to buy [category] versus just grabbing what is already at home?”
  • “When you are standing in front of the shelf, what is going through your mind?”
  • “How much time do you typically spend choosing?”

What to listen for: Purchase triggers, decision timing, competitive consideration set, and the role of habit versus deliberation. This context determines whether your concept needs to interrupt a habit (hard) or capture a deliberation moment (easier).

3. “Imagine you see this product on the shelf next to what you usually buy. What would make you pick it up? What might make you pass?”

Why it works: Forces the participant to evaluate the concept in a competitive context rather than in isolation. “Would you buy this?” in a vacuum produces inflated purchase intent. “Would you choose this over what you usually buy?” produces realistic evaluation.

Follow-up probes:

  • “What specifically about it caught your attention versus the brands around it?”
  • “You mentioned [reason for passing]. Tell me more about why that would stop you.”
  • “What would this product need to change for you to try it?”

What to listen for: The specific barriers to trial. If the barrier is price, that is a positioning problem. If the barrier is trust in a new brand, that is a different problem. If the barrier is “I do not understand what this is,” that is a packaging and messaging problem.

4. “What do you think this product is trying to be? Who do you think it is for?”

Why it works: Tests positioning clarity. If the consumer’s perception of who the product is for does not match your target, the concept has a positioning gap. This question also reveals whether the concept feels aspirational, practical, indulgent, or functional — which maps directly to marketing strategy.

Follow-up probes:

  • “What makes you say that?”
  • “Do you see yourself as someone this product is for? Why or why not?”
  • “If this were repositioned for someone exactly like you, what would need to change?”

What to listen for: Whether the perceived target matches the actual target. Misalignment here is one of the most common and most costly concept testing failures.

5. “If this product were priced at [price], how does that feel relative to what you normally pay in this category?”

Why it works: Anchors price evaluation to the participant’s actual reference price, not an abstract “would you pay $X?” question. The word “feel” is deliberate — it invites emotional response (too expensive, a good deal, suspicious) rather than rational calculation.

Follow-up probes:

  • “What would you expect a product like this to cost?”
  • “At what price would this feel like a no-brainer?”
  • “At what price would you start questioning the quality?”

What to listen for: The price-quality inference. In CPG, price communicates positioning. A price that is too low signals low quality. A price that is too high creates a barrier. The sweet spot is where price confirms the positioning without creating friction.

6. “What concerns or hesitations would you have about trying this for the first time?”

Why it works: Directly surfaces purchase barriers. Consumers are more willing to articulate concerns than to fake enthusiasm. The barriers they identify are the ones your marketing and packaging need to address.

Follow-up probes:

  • “What would it take to overcome that concern?”
  • “Have you ever had a bad experience trying a new product in this category? What happened?”
  • “If a friend recommended this specifically, would that change your hesitation?”

What to listen for: Whether barriers are addressable (lack of information, unfamiliar brand) or structural (wrong category, wrong price tier, wrong occasion).

7. “If you tried this product and liked it, how likely is it to become part of your regular rotation? What would it replace?”

Why it works: Tests the replacement mechanism. In CPG, new product adoption almost always means displacing an existing product. Understanding what gets replaced — and why — reveals the competitive frame and the switching motivation.

Follow-up probes:

  • “What would this product need to deliver in that first experience to earn a second purchase?”
  • “What is the product you would stop buying to make room for this? What is not quite right about that product?”
  • “How many products in this category do you typically rotate between?”

What to listen for: Whether the concept has a clear displacement target or is a “nice to have” addition. Products with clear displacement targets have stronger adoption curves.

8. “What is the single best thing about this concept? What is the single weakest thing?”

Why it works: Forces prioritization. When participants can list multiple positives and negatives, they tend to generate laundry lists. Forcing a single best and single worst reveals the dominant perception.

Follow-up probes:

  • “Why is that the most important strength? What does it give you that other products do not?”
  • “How big of a problem is that weakness? Is it a dealbreaker or just a nice-to-have improvement?”

What to listen for: Whether the “best thing” maps to a purchase driver and whether the “worst thing” maps to a purchase barrier. If the best thing is a feature that does not drive purchase and the worst thing is a feature that does, the concept has a structural problem.

9. “How is this different from anything else you have seen in this category?”

Why it works: Tests perceived differentiation. Many CPG concepts offer incremental improvements that consumers cannot articulate as meaningfully different. If a participant struggles to explain how this is different, the concept lacks differentiation — regardless of what the R&D team believes.

Follow-up probes:

  • “Is that difference important to you? Would you pay more for it?”
  • “Have you seen other brands try to do something similar?”
  • “Does this feel like a real innovation or just a different version of what already exists?”

What to listen for: Whether differentiation is perceived as meaningful or marginal. “It is a little bit different” is not the same as “this solves something nothing else does.”

10. “Walk me through the moment you would decide to repurchase this versus going back to what you usually buy.”

Why it works: Tests repurchase intent through scenario construction rather than a direct “would you buy again?” question. By asking the participant to construct the decision moment, you learn the evaluation criteria they would apply at the point of repurchase.

Follow-up probes:

  • “What would this product need to deliver in the first use to earn that repurchase?”
  • “How many times would you need to try it before it became your go-to?”
  • “What could happen in that first experience that would make you never buy it again?”

What to listen for: The repurchase threshold. Some categories have low switching costs (try once, adopt or drop). Others require multiple positive experiences. Understanding the repurchase mechanism shapes launch strategy.

11. “If this product came in three varieties, which variety would you want to try first, and why?”

Why it works: Tests variety architecture and entry point strategy. The variety consumers want to try first reveals which product attribute drives initial trial. This informs SKU prioritization and launch sequencing.

Follow-up probes:

  • “What is it about that variety that draws you in?”
  • “Would you eventually try the others, or is that one variety enough for you?”
  • “Is there a variety that is missing from these options?”

What to listen for: Whether consumers see the variety as a portfolio to explore or a single-item decision. This determines whether your line extension strategy drives incrementality or cannibalization.

12. “On a scale of ‘I would walk right past it’ to ‘I would stop and pick it up,’ where does this land — and what would move it one step higher?”

Why it works: Combines a behavioral scale (rooted in the real shopping experience) with an improvement prompt. The “one step higher” question identifies the specific change that would increase conversion.

Follow-up probes:

  • “What specifically would you notice first that would make you stop?”
  • “If the packaging were different but the product were the same, would that change anything?”

What to listen for: Whether the limiting factor is awareness (they would not notice it), consideration (they would notice but not pick it up), or conversion (they would pick it up but not buy it). Each requires a different intervention.

Brand Health Questions (13-24)


These questions track brand perception, equity, and competitive positioning over time. Use them in quarterly pulse studies or dedicated brand health research. (See our brand health tracking guide for CPG for the full tracking methodology.)

13. “When I say [brand name], what are the first three words or images that come to mind?”

Why it works: Captures top-of-mind brand associations without priming. The spontaneous associations reveal the brand’s actual position in the consumer’s mind, which may differ significantly from the intended positioning.

Follow-up probes:

  • “Where do those associations come from? Was it advertising, product experience, or something else?”
  • “Have those associations changed over the past year?”
  • “If you had to pick just one word, what would it be?”

14. “If [brand name] were a person, how would you describe their personality?”

Why it works: Accesses brand personality through projection. Consumers can articulate personality traits about a brand they cannot articulate as abstract brand attributes. This reveals the emotional relationship with the brand.

Follow-up probes:

  • “Would you want to be friends with this person? Why or why not?”
  • “How is this person different from [competitor brand] as a person?”

15. “Think about the last time you chose [brand name] over an alternative. Walk me through that moment.”

Why it works: Grounds brand evaluation in a real purchase decision rather than abstract preference. The details of the decision moment reveal the actual purchase drivers.

Follow-up probes:

  • “What was the alternative you considered?”
  • “What tipped you toward [brand name] in that moment?”
  • “Was that a typical decision for you, or was something different about that occasion?”

16. “What would [brand name] have to do to lose you as a customer?”

Why it works: Identifies the brand’s loyalty floor — the actions or failures that would trigger switching. This reveals what consumers value most about the brand: the things whose loss would end the relationship.

Follow-up probes:

  • “Has [brand name] ever come close to losing you? What happened?”
  • “If a friend told you they stopped buying [brand name], what reason would you guess?”

17. “How has your relationship with [brand name] changed over the past year?”

Why it works: Captures brand trajectory from the consumer’s perspective. Brands do not hold static positions — they are gaining or losing meaning in consumers’ lives. This question reveals the direction.

Follow-up probes:

  • “What caused that change?”
  • “Are you buying more or less of [brand name] than a year ago?”
  • “Is there a specific moment that shifted your perception?”

18. “Name a brand in this category that has gotten better recently. What did they do?”

Why it works: Identifies competitive threats through positive competitor perception. A competitor that consumers perceive as “getting better” is gaining brand equity, even if their market share has not yet reflected it.

Follow-up probes:

  • “Has that changed how you think about [your brand]?”
  • “What specifically did they improve — the product, the packaging, the messaging, or something else?”

19. “What does [brand name] stand for that no other brand in this category can claim?”

Why it works: Tests brand differentiation and defensibility. If consumers cannot identify something unique, the brand is substitutable. The specific language they use reveals which attributes the brand truly owns.

Follow-up probes:

  • “How important is that to you when making purchase decisions?”
  • “Could another brand credibly claim the same thing?”

20. “If [brand name] disappeared from shelves tomorrow, what would you buy instead and how would you feel about it?”

Why it works: Tests brand essentiality. If consumers can easily name a substitute and express little concern, the brand has low switching costs. If they struggle to find an equivalent and express genuine loss, the brand has strong equity.

Follow-up probes:

  • “Would that substitute be just as good, or would you feel like you were settling?”
  • “How long would you look for [brand name] before switching?”

21. “What is one thing you wish [brand name] would change about their products?”

Why it works: Surfaces unmet needs and improvement opportunities directly from loyal consumers. These are often incremental improvements that strengthen the relationship rather than radical innovations.

Follow-up probes:

  • “How important is that change to you? Would you buy more if they made it?”
  • “Have you seen other brands do that well?”

22. “When you see [brand name] advertising, what message are they trying to communicate?”

Why it works: Tests advertising effectiveness and message clarity. If consumers cannot articulate the brand’s intended message, the advertising is not working — regardless of recall metrics.

Follow-up probes:

  • “Does that message resonate with you? Why or why not?”
  • “What would be a more compelling message for someone like you?”

23. “How do you feel about [brand name]’s price relative to what you get?”

Why it works: Tests perceived value, not just price sensitivity. Consumers evaluate price relative to the total value package — product quality, brand trust, packaging, convenience, and emotional benefits.

Follow-up probes:

  • “Has your perception of the value changed recently?”
  • “At what point would the price become too high for you to justify?”
  • “Is there a cheaper alternative that you think delivers similar value?”

24. “If you were advising [brand name]’s CEO on how to win more customers like you, what would you tell them?”

Why it works: Invites the consumer to be a strategic advisor rather than a passive evaluator. This framing often produces the most candid and actionable feedback because it legitimizes constructive criticism.

Follow-up probes:

  • “What is the biggest mistake they are making right now?”
  • “What is the biggest opportunity they are missing?”

Packaging and Design Questions (25-36)


These questions evaluate packaging concepts, design changes, and shelf presence. Use them in packaging validation research or when testing design iterations.

25. “Look at this packaging for a few seconds. Now tell me what you remember.”

Why it works: Tests initial attention capture and information hierarchy. What consumers remember after a brief exposure reveals what the packaging actually communicates versus what the design team intended it to communicate.

Follow-up probes:

  • “What stood out most? Why?”
  • “What did you not notice that you think is important?”
  • “Based on what you remember, what do you think this product is?”

26. “What quality level does this packaging suggest — budget, mainstream, or premium? What makes you say that?”

Why it works: Tests quality-price inference. Packaging is the primary quality signal for CPG products at shelf. If the packaging signals a quality level that does not match the price, the product faces a perception gap.

Follow-up probes:

  • “What specific elements of the packaging communicate that quality level?”
  • “If you saw this at [price], would the packaging match your expectations?”

27. “Compare this packaging to what you usually buy in this category. What stands out as different?”

Why it works: Tests differentiation in a competitive context. Packaging that looks distinctive in isolation may blend in on a shelf surrounded by the competitive set.

Follow-up probes:

  • “Is that difference positive, negative, or neutral?”
  • “Would this packaging make you more or less likely to try the product?”

28. “What information are you looking for on this packaging that you cannot find?”

Why it works: Identifies information gaps that create purchase barriers. If consumers are looking for ingredient lists, certifications, usage instructions, or brand stories that are not prominent, the packaging fails at its information delivery function.

Follow-up probes:

  • “How important is that information to your purchase decision?”
  • “Where would you expect to find it?”

29. “If you were choosing between this packaging and [competitor packaging], which would you grab first and why?”

Why it works: Simulates the shelf decision in a direct comparison. This reveals which design elements drive selection behavior versus preference in isolation.

Follow-up probes:

  • “What about the other packaging is more appealing?”
  • “Is this a head decision or a gut decision?”

30. “What does this packaging tell you about the company behind the product?”

Why it works: Tests brand story communication through packaging. In CPG, packaging is often the only brand touchpoint. What consumers infer about the company from packaging drives trust and purchase consideration.

Follow-up probes:

  • “Does that align with the kind of company you want to buy from?”
  • “What would make you trust this company more based on the packaging alone?”

31. “How would you feel about other people seeing this packaging — in your cart, on your counter, or in your pantry?”

Why it works: Tests social acceptability and aspirational value. Some CPG purchases are socially visible (grocery cart, pantry, counter). Packaging that consumers feel self-conscious about has a hidden adoption barrier.

Follow-up probes:

  • “Is that important to you in this category?”
  • “What about the packaging creates that feeling?”

32. “Walk me through how you would use this product at home. Does the packaging support that use?”

Why it works: Tests functional packaging design — resealability, portion control, storage, dispensing. Functional failures drive non-repurchase even when the product itself is excellent.

Follow-up probes:

  • “What is frustrating about packaging in this category generally?”
  • “What is the best packaging you have ever used in any food product?”

33. “If this packaging changed to [new version], what would you think happened to the product inside?”

Why it works: Tests packaging change perception. Consumers infer product changes from packaging changes. A redesign intended to modernize the brand may be interpreted as a reformulation, a quality downgrade, or a price increase.

Follow-up probes:

  • “Would that concern you?”
  • “Would you need to try the product again to verify it is the same?”

34. “Which of these three packaging options feels most ‘right’ for this product? Not which one you like most — which one fits the product?”

Why it works: Separates aesthetic preference from brand fit. Consumers may prefer a packaging design aesthetically while recognizing it does not fit the product category or brand positioning.

Follow-up probes:

  • “What makes it feel ‘right’?”
  • “Which feels most wrong? Why?”

35. “If you saw this packaging from five feet away in a store, would you know what it is?”

Why it works: Tests shelf legibility and category recognition at distance. Many packaging designs that look beautiful in close-up renderings fail at shelf distance, where category cues need to be immediately recognizable.

Follow-up probes:

  • “What would you assume it is from that distance?”
  • “What would need to be bigger or more prominent for you to recognize it instantly?”

36. “Does anything about this packaging make you question the product’s authenticity or quality?”

Why it works: Surfaces trust barriers. In an era of counterfeits, private label competition, and greenwashing, consumers are increasingly skeptical. Packaging elements that trigger doubt — fonts that feel generic, claims that seem exaggerated, colors that feel cheap — create invisible purchase barriers.

Follow-up probes:

  • “What specifically triggers that concern?”
  • “What would make you more confident in the product’s quality?”

Consumer Segmentation Questions (37-46)


These questions help identify distinct consumer groups within a category, understand their motivations, and map their decision processes. Use them in segmentation studies or when building consumer personas grounded in real behavior.

37. “Describe your relationship with [category] in one sentence.”

Why it works: Reveals the emotional role of the category in the consumer’s life. Consumers who say “it is just something I need to buy” are fundamentally different from those who say “it is one of the small pleasures in my day.”

Follow-up probes:

  • “Has that relationship changed over the years?”
  • “What would change that relationship?”

38. “Walk me through the last three times you bought [category]. Were they the same brand each time?”

Why it works: Maps actual purchase behavior rather than stated loyalty. The pattern of same brand versus variety-seeking versus deal-driven purchasing reveals the consumer’s relationship with the category and with specific brands.

Follow-up probes:

  • “What drove the decision each time?”
  • “Were any of those purchases influenced by a promotion or deal?”
  • “Is that pattern typical for you?”

39. “What information sources do you trust when deciding what to buy in this category?”

Why it works: Maps the influence ecosystem for each segment. Some consumers trust product labels, others trust influencers, others trust friends, others trust professional reviews. The trusted information source reveals the marketing channel most likely to reach each segment.

Follow-up probes:

  • “Have your trusted sources changed in the past few years?”
  • “Is there a source you actively distrust?”

40. “How much time and thought do you put into choosing [category] compared to other grocery purchases?”

Why it works: Maps category involvement level. High-involvement consumers read labels, compare options, and make deliberate choices. Low-involvement consumers grab the familiar option or the best deal. Different segments require different marketing approaches.

Follow-up probes:

  • “What makes this category more or less important to think about?”
  • “Are there times when you think about it more carefully than others?”

41. “Tell me about a time you switched brands in this category. What happened?”

Why it works: Reveals switching triggers and barriers through real experience rather than hypothetical scenarios. The specific event that triggered a switch is more predictive than abstract switching intent.

Follow-up probes:

  • “Was it a planned decision or a spur-of-the-moment choice?”
  • “Did you ever go back to the previous brand?”
  • “What would make you switch again?”

42. “What would the perfect [category] product look like for someone exactly like you?”

Why it works: Reveals unmet needs by asking consumers to design their ideal rather than evaluate what exists. The gaps between the ideal and current options represent innovation opportunities.

Follow-up probes:

  • “What is the most important feature of that perfect product?”
  • “Does anything close to that exist today?”
  • “How much more would you pay for that perfect product versus what you buy now?”

43. “How does [category] fit into your broader values about food, health, and spending?”

Why it works: Connects category behavior to personal values — the top of the laddering hierarchy. Value-driven consumers (organic, sustainable, local) behave differently from price-driven consumers, who behave differently from convenience-driven consumers.

Follow-up probes:

  • “Have those values changed recently?”
  • “Are there categories where you are willing to compromise on those values?”
  • “Is [category] one where values matter more or less than other grocery categories?”

44. “When you think about spending money on [category], do you think of it as a necessity, a treat, or something else?”

Why it works: Identifies the mental accounting frame. Products positioned as necessities compete on value and reliability. Products positioned as treats compete on pleasure and indulgence. The frame determines the competitive set and the purchase decision criteria.

Follow-up probes:

  • “Does that change depending on the specific product within the category?”
  • “How much does price sensitivity change when you think of it as [necessity/treat]?”

45. “Who else in your household influences what you buy in this category?”

Why it works: Identifies the purchase decision unit. Many CPG purchases involve multiple stakeholders — children requesting brands, partners with dietary preferences, household members with allergies or restrictions. Understanding the influence network maps the actual decision process.

Follow-up probes:

  • “Whose preferences take priority when there is a disagreement?”
  • “Do you ever buy different products in this category for different household members?”

46. “If your household budget got tighter, what would change about how you buy [category]?”

Why it works: Tests price elasticity and downtrading behavior. This reveals which product attributes are essential versus dispensable, and which consumers would switch to private label, reduce quantity, or leave the category entirely.

Follow-up probes:

  • “Would you buy a cheaper brand, buy less, or stop buying altogether?”
  • “What is the last thing you would give up about what you currently buy?”

Claims Validation Questions (47-56)


These questions test product claims for believability, relevance, and motivation power. Use them in claims validation research, regulatory preparation, or messaging development.

47. “Read this claim: [claim]. In your own words, what is it saying?”

Why it works: Tests comprehension before testing believability. A claim that consumers misunderstand cannot be effective regardless of how true or compelling it is. The participant’s paraphrase reveals whether the claim communicates what you intend.

Follow-up probes:

  • “Is there any part of that claim that is confusing?”
  • “What would make the claim clearer?”

48. “How believable is this claim? What makes you believe it or doubt it?”

Why it works: Separates believability from relevance. A claim can be believable but unimportant, or important but unbelievable. This question isolates the trust dimension.

Follow-up probes:

  • “What evidence would you need to see to fully believe this?”
  • “Does the brand’s reputation affect how believable this is?”
  • “Have you seen similar claims from other brands? Did you believe those?”

49. “If this claim is true, does it make you more likely to buy this product? Why or why not?”

Why it works: Tests motivation power. Many claims pass the believability test but fail the “so what” test. A claim can be true and believed but not motivating because it addresses an attribute the consumer does not value.

Follow-up probes:

  • “On a scale from ‘would not change anything’ to ‘would definitely make me switch,’ where does this land?”
  • “What claim would make you more likely to buy?”

50. “Rank these three claims from most to least important to your purchase decision.”

Why it works: Forces prioritization among competing claims. When all claims test positively in isolation, ranking reveals which one should lead in packaging and advertising.

Follow-up probes:

  • “What makes [top-ranked claim] more important than [second-ranked claim]?”
  • “Could any of these claims actually make you less likely to buy?”

51. “Does this claim set an expectation that the product might not live up to?”

Why it works: Tests the risk of over-promise. Claims that set unrealistic expectations drive trial but destroy repurchase when the experience fails to match. This question surfaces the risk before launch.

Follow-up probes:

  • “What would happen if you tried the product and it did not fully deliver on this claim?”
  • “Would you give the brand a second chance?”

52. “Have you ever felt misled by a product claim in this category? What happened?”

Why it works: Reveals category-level skepticism that affects how your specific claim will be received. If consumers have been burned by similar claims from competitors, your claim starts at a trust deficit.

Follow-up probes:

  • “What did the brand do that felt misleading?”
  • “Does that experience affect how you evaluate claims from other brands in this category?”

53. “If you saw this claim on the shelf without any other information, what would you assume about the product?”

Why it works: Tests the inference chain. Claims generate assumptions beyond their literal content. A “low sugar” claim might imply taste compromise. A “made with real fruit” claim might imply higher price. Understanding these inferences prevents unintended consequences.

Follow-up probes:

  • “Are those assumptions positive or negative?”
  • “Would you want to verify any of those assumptions before buying?”

54. “Which of these claims feels most relevant to the problems you actually experience in this category?”

Why it works: Tests relevance to real consumer needs rather than abstract importance. The claim that maps to an actual experienced problem has the highest motivation potential.

Follow-up probes:

  • “Tell me about the last time you experienced that problem.”
  • “What are you currently doing to solve it?”

55. “Does this claim make you think of this product differently than others in the category?”

Why it works: Tests differentiation through claims. A claim that does not separate the product from alternatives is a wasted claim — it may be true and believed and relevant, but if competitors can make the same claim, it does not drive preference.

Follow-up probes:

  • “Could any other brand in this category credibly make this claim?”
  • “If [competitor] made the same claim, would it be equally believable?”

56. “If this product had to pick one claim to put on the front of the package, which of these would make you most likely to pick it up?”

Why it works: Forces the shelf-level hierarchy decision. Packaging real estate is limited. The claim that drives pickup behavior at shelf deserves front-of-pack placement.

Follow-up probes:

  • “Why that one over the others?”
  • “Where would you expect to find the other claims?”

Product Innovation Questions (57-66)


These questions support innovation pipeline research, from early-stage opportunity identification through concept refinement. Use them in innovation sprints, white space exploration, or line extension evaluation. (For the full framework, see our product innovation research template for CPG.)

57. “What frustrates you most about the current options available in [category]?”

Why it works: Identifies unmet needs through frustration. Consumer frustrations are innovation opportunities. The strongest innovations solve problems consumers have learned to live with because they assumed no solution existed.

Follow-up probes:

  • “How do you currently work around that frustration?”
  • “How much would you pay to solve it?”
  • “Have you seen any brand try to address it?”

58. “Describe a moment in the past month when you wished a [category] product existed that does not currently.”

Why it works: Captures unmet needs through specific moments rather than abstract wants. Moment-based needs are more predictive of purchase behavior than hypothetical needs.

Follow-up probes:

  • “What were you doing when that moment happened?”
  • “What did you end up doing instead?”
  • “How often does that moment occur?”

59. “If you could combine the best elements of two different products in this category, what would you create?”

Why it works: Reveals the attribute combinations that consumers value but that no single product delivers. This maps directly to product development specifications.

Follow-up probes:

  • “Which product provides the first element? What specifically about it?”
  • “Which product provides the second? What specifically?”
  • “Why does no product combine both today?”

60. “Rank these five factors from most to least important to you: taste, convenience, health benefits, price, brand. Then walk me through why.”

Why it works: Maps the attribute hierarchy for innovation prioritization. The rank order determines which attributes to optimize for and which are secondary. The “why” behind the ranking reveals whether the hierarchy is stable or context-dependent.

Follow-up probes:

  • “Does that ranking change depending on the occasion?”
  • “Has that ranking changed in the past few years?”

61. “Think about a new product you recently tried and loved in any food category. What made it successful?”

Why it works: Identifies the consumer’s personal innovation adoption criteria through a positive real-world example. What they valued in a recent successful adoption predicts what they will value in your innovation.

Follow-up probes:

  • “How did you first discover it?”
  • “What made you try it the second time?”
  • “What would a product in [target category] need to do to create that same feeling?”

62. “If a brand you love launched a product in [new category], what would you expect from it?”

Why it works: Tests brand stretch potential. Understanding what consumers expect from a brand in a new category reveals whether the extension is credible and what attributes must carry over.

Follow-up probes:

  • “What aspects of that brand would need to be present in the new product?”
  • “What would make you skeptical about them entering this category?”

63. “What is the most recent food or beverage trend you have noticed? Do you think it will last?”

Why it works: Maps trend awareness and skepticism. Consumers who are early adopters perceive trends differently from mainstream consumers. The durability assessment helps separate fads from lasting shifts.

Follow-up probes:

  • “Have you personally tried products tied to that trend?”
  • “What would make the trend fade?”

64. “If you had to choose between a product that was perfect on taste but average on health, versus perfect on health but average on taste, which would you choose in this category?”

Why it works: Forces the core tradeoff in many CPG innovation decisions. The forced choice reveals the dominant decision criterion when optimization across all attributes is impossible.

Follow-up probes:

  • “How average is average? Is there a minimum threshold on either?”
  • “Does this change by eating occasion?”

65. “What is one thing about [category] that has not changed in years but probably should?”

Why it works: Identifies stale category conventions that consumers accept because they have no alternative. These are high-potential innovation targets because they address latent dissatisfaction.

Follow-up probes:

  • “Why do you think it has not changed?”
  • “If someone changed it, would you try their product?”

66. “How would you react if [brand] launched this concept at [price]? Walk me through your thought process.”

Why it works: Tests the complete commercial proposition — brand credibility, concept appeal, and price acceptance together. This simulates the real purchase decision more accurately than testing each element separately.

Follow-up probes:

  • “What if it were from a brand you had never heard of?”
  • “What if the price were [higher/lower]?”

Brand Switching Questions (67-75)


These questions investigate why consumers switch brands, what triggers the switch, and what would bring them back. Use them in competitive intelligence research, win-back campaigns, or churn prevention.

67. “Tell me about the last time you switched from one brand to another in [category]. What triggered it?”

Why it works: Captures the switching trigger through recent experience. The trigger is often situational (out of stock, promotion, recommendation) rather than attitudinal (dissatisfaction). Understanding the trigger type determines the prevention strategy.

Follow-up probes:

  • “Was it a single event or a gradual process?”
  • “Did you plan to switch or did it happen in the moment?”
  • “How long had you been buying the previous brand?”

68. “What was the ‘permission moment’ that made you comfortable trying a new brand?”

Why it works: Identifies the psychological barrier reduction that enables switching. Even when consumers are dissatisfied, switching requires a moment when they feel “allowed” to try something different — a friend’s recommendation, a risk-free trial, a coupon, or a product sample.

Follow-up probes:

  • “Would you have switched without that moment?”
  • “How long were you thinking about switching before you actually did?”

69. “Compare your experience with your old brand versus the new one. Where does each win?”

Why it works: Maps the competitive advantage structure from the consumer’s perspective. Where the old brand wins reveals retention opportunities. Where the new brand wins reveals the competitive threat.

Follow-up probes:

  • “Is there anything your old brand does better that you miss?”
  • “If your old brand improved on [the area where the new brand wins], would you switch back?”

70. “What would your old brand need to do to get you back?”

Why it works: Identifies the specific win-back conditions. These are the most actionable insights for brand teams because they come from real lapsed customers rather than hypothetical scenarios.

Follow-up probes:

  • “Is there anything they could do today, or is it too late?”
  • “Would a promotion or discount be enough, or does the product itself need to change?”

71. “Think about the private label or store brand alternative in this category. How does it compare to what you usually buy?”

Why it works: Maps the private label competitive threat. In CPG, private label is one of the most significant competitive forces across categories. Understanding where consumers perceive parity versus premium difference determines brand defensibility.

Follow-up probes:

  • “Have you tried the store brand? What was the experience?”
  • “What would make you switch to the store brand permanently?”
  • “What keeps you paying more for the national brand?”

72. “If you discovered that [brand] increased their price by 15%, what would you do?”

Why it works: Tests price-driven switching thresholds. In an inflationary environment, understanding the price increase tolerance per segment informs pricing strategy and trade promotion planning.

Follow-up probes:

  • “At what percentage increase would you definitely switch?”
  • “Would you switch to another brand, to private label, or reduce how often you buy?”

73. “What brands in this category are you aware of but have never tried? What is stopping you?”

Why it works: Identifies the consideration set boundaries and the barriers to trial. Brands that consumers know about but have not tried represent a specific type of competitive challenge — the barrier is not awareness but rather conversion.

Follow-up probes:

  • “What would it take for you to try one of those brands?”
  • “Is there a brand in this category that you would never try? Why?”

74. “Think about someone who used to buy [brand] and switched away. What reason would you guess they had?”

Why it works: Uses third-person projection to surface brand vulnerabilities that consumers may be reluctant to state directly. “Why would someone else leave?” is easier to answer honestly than “Why did you leave?” when the consumer is still a current buyer.

Follow-up probes:

  • “Is that a reason you have ever considered yourself?”
  • “Do you think that reason is becoming more or less common?”

75. “If you could send one message to [brand]’s CEO about their product, what would it be?”

Why it works: Invites maximum candor by framing the consumer as an advisor to leadership. This question often surfaces the single most important piece of feedback because the consumer is forced to prioritize.

Follow-up probes:

  • “Why is that the most important message?”
  • “How long have you felt that way?”
  • “Do you think they know this?”

How Do You Use These Questions in AI-Moderated Interviews?


These 75 questions are designed to work in both human-moderated and AI-moderated interview settings, but the AI moderation context changes how they function in two important ways.

First, AI moderation handles the follow-up probing automatically. When you load a primary question into an AI-moderated study, the moderator adapts its probing in real time based on the participant’s response. If a participant mentions price as a switching trigger, the AI probes deeper on price sensitivity. If another participant mentions packaging, the AI probes deeper on design perception. This adaptive probing means the same primary question generates different depth paths for different participants — capturing the full range of consumer perspectives without a human moderator deciding where to probe.

Second, AI moderation eliminates moderator bias in question delivery. Human moderators unconsciously vary tone, emphasis, and follow-up direction based on their expectations and fatigue level. Interview #5 gets different probing than interview #50. AI moderation delivers consistent probing across all interviews, which makes cross-participant analysis more reliable.

Building a CPG Research Program Around These Questions

The most effective CPG research programs use these questions as a rotating library rather than a static guide:

  1. Monthly pulse studies (8-10 questions): Select questions from one or two categories per month. January: brand health + segmentation. February: packaging + claims. March: innovation + switching.

  2. Event-triggered studies (10-12 questions): When a competitor launches, a price change happens, or a category disruption occurs, pull relevant questions and launch a study within hours.

  3. Longitudinal tracking: Ask the same 5-6 brand health questions quarterly to track movement over time. The Intelligence Hub connects responses across quarters automatically.

At $20 per interview and 48-72 hours per study, you can use these questions continuously rather than saving them for one annual study.

Ready to use these questions with real consumers? Launch a free study with 30 AI-moderated consumer interviews in 48 hours. No credit card required. Or book a demo to walk through how the AI moderator handles follow-up probing in real time.

Frequently Asked Questions

What questions should you ask in CPG consumer research interviews?

The questions depend on your research objective. For concept testing, focus on initial reactions, purchase consideration, and value perception. For brand health, probe brand associations, competitive perception, and loyalty drivers. For packaging, ask about first impressions, quality signals, and shelf differentiation. The key principle across all CPG research: ask “why” at least 5 levels deep to move from stated preferences to underlying motivations.

How many questions fit in a single interview?

A 30-minute AI-moderated interview typically covers 8-12 primary questions with extensive follow-up probing. A 45-minute human-moderated interview covers 10-15 primary questions. The depth comes from follow-up probes, not the number of initial questions. Over-stuffing an interview guide with 25+ questions produces surface-level responses across all topics rather than deep understanding of a few critical areas.

What is laddering in CPG research?

Laddering is a questioning technique that traces the chain from product attributes to functional benefits to emotional benefits to personal values. In CPG, a consumer might say they prefer a particular packaging design (attribute). Through laddering, you discover it signals freshness (functional benefit), which means they feel like a good parent providing healthy food (emotional benefit), which connects to their core value of family wellbeing.

How do you avoid leading questions?

Three rules: never include the desired answer in the question (“Don’t you think this packaging looks premium?”), never provide a scale that anchors high (“On a scale of 1-10, how much do you love this?”), and never reference what other consumers said (“Most people prefer version A — what about you?”). AI-moderated interviews follow non-leading language protocols automatically, which eliminates moderator bias from the study design.

Can these questions be used in surveys?

Most of these questions are designed for depth interviews where follow-up probing is possible. Survey versions would need to be adapted — removing open-ended laddering in favor of structured response options. However, the research objectives and question topics transfer directly. The difference is that surveys will tell you what consumers prefer, while interviews using these questions with follow-up probing will tell you why.

How should you ask about competitors?

Present competitors neutrally: “Walk me through the brands you consider when buying [category]” rather than “What do you think about [specific competitor]?” Let consumers bring up competitors organically, then probe. When you need to name a competitor, present it alongside your brand and 1-2 others so no single brand is highlighted. Never ask consumers to evaluate your brand first and competitors second.