
CPG Market Research Template: Repeatable Research Framework

By Kevin, Founder & CEO

A CPG market research template is a reusable framework for designing, conducting, analyzing, and reporting consumer research. It is not a marketing strategy document with a “research” section bolted on — it is the actual scaffolding for running rigorous consumer studies that produce evidence-backed decisions.

Most “CPG research templates” available online are marketing strategy templates that have been mislabeled. They include sections on target market definition, competitive positioning, and channel strategy — which are outputs of research, not research itself. A real research template covers the upstream process: how to frame the question, who to ask, what to ask them, how to analyze what they say, and how to present the findings to stakeholders who will make the decision.

This guide provides five complete, reusable research templates for the five most common CPG research types: concept testing, brand health tracking, packaging validation, innovation pipeline screening, and consumer segmentation. Each template includes the research brief, discussion guide skeleton, analysis framework, and reporting format.

What Is the Universal CPG Research Brief Template?


Every CPG study starts with a research brief. This is the one-page document that aligns stakeholders on what the research will and will not answer. A clear brief prevents scope creep, misaligned expectations, and the most common research failure: answering a question nobody is actually making a decision about.

Research Brief: Standard Format

Study title: [Descriptive name, e.g., “Q2 Premium Snack Bar Concept Test”]

Business decision: What specific decision does this research inform?

  • Example: “We are deciding between three packaging redesigns for our premium snack bar line. Retail presentation deadline is April 15.”

What we know: What existing data informs this question?

  • Example: “NielsenIQ shows our velocity declined 12% in natural channel Q1. Exit survey data suggests packaging confusion at shelf. Our current packaging scores 4.2/5 on ‘quality perception’ in tracker.”

What we need to learn: What specific gap does this research fill?

  • Example: “Which of three packaging concepts (A, B, C) best communicates premium quality to natural channel buyers, and why? What are the specific design elements driving preference or rejection?”

Target audience:

  • Who: [Demographic and behavioral criteria]
  • Sample size: [Number of interviews, with rationale]
  • Key segments: [Any subgroups for separate analysis]
  • Example: “Adults 25-54 who purchase premium snack bars in grocery or natural channels at least 2x/month. 150 interviews total: 50 natural channel primary, 50 grocery primary, 50 online primary.”

Success criteria: What would a successful study produce?

  • Example: “A clear winner among the three concepts with articulated reasons for preference, specific design elements that drive or diminish purchase intent, and actionable recommendations for final design refinement.”

Methodology: [AI-moderated interviews, focus groups, survey, etc.]

Timeline: Launch date, expected completion, decision date

Stakeholders: Who will receive results and make the decision?

Budget: Total available, including any contingency for follow-up

Common Brief Failures in CPG

Too broad: “We want to understand how consumers feel about our brand.” This produces a study with no decision anchor. Everything is interesting; nothing is actionable.

Too narrow: “We want to know if consumers prefer blue or green packaging.” This produces a binary answer without the “why” that makes it useful.

Missing the decision: “We want to run a concept test because we always run concept tests at this stage.” If the concept is going to launch regardless of research results, the research has no decision to inform.

Right scope: “We are choosing between three concepts for our Q3 launch. We need to understand which concept best connects with our target on convenience and health values, and what specific modifications would strengthen the winner.”

Template 1: Concept Testing


Concept testing validates product ideas, packaging designs, and messaging with real consumers before committing to production or launch. (For the complete methodology, see our CPG concept testing guide.)

Research Brief: Concept Testing

Decision at stake: Which concept(s) advance in our innovation pipeline?

Typical sample: 50-200 verified category purchasers per concept (monadic design) or 100-300 total (sequential design)

Timeline: 48-72 hours with AI-moderated interviews

Budget: $1,000-$4,000 per concept (AI-moderated)

Discussion Guide: Concept Testing

Section 1: Category Context (5 minutes)

  • “Walk me through your typical buying pattern in [category]. How often, where, and what influences your choice?”
  • “What is your go-to product in this category? What keeps you buying it?”
  • Purpose: Establish the competitive context before concept exposure.

Section 2: Initial Reaction (5 minutes)

  • Present concept [describe format: concept board, packaging mockup, or text description]
  • “Tell me your initial reaction in your own words.”
  • “What is the first thing you noticed?”
  • “What does this remind you of?”
  • Purpose: Capture uncontaminated first impressions.

Section 3: Detailed Evaluation (10 minutes)

  • “What do you think this product is trying to be? Who is it for?”
  • “What is the single best thing about this concept?”
  • “What concerns or hesitations would you have about trying this?”
  • “How is this different from anything else you have seen in this category?”
  • Purpose: Evaluate appeal, differentiation, and barriers with laddering depth.

Section 4: Purchase Consideration (5 minutes)

  • “If this product were priced at [price], how does that feel relative to what you normally pay?”
  • “Imagine you see this on the shelf next to what you usually buy. What would make you pick it up? What might make you pass?”
  • “If you tried this and liked it, what would it replace in your regular rotation?”
  • Purpose: Assess purchase intent, price sensitivity, and competitive displacement.

Section 5: Improvement and Closure (5 minutes)

  • “What one change would make this concept significantly more appealing to you?”
  • “On a scale of ‘would walk right past’ to ‘would stop and pick it up,’ where does this land?”
  • Purpose: Identify optimization opportunities and capture overall assessment.

Analysis Framework: Concept Testing

Dimension | Coding Categories | What to Quantify
Initial reaction | Positive / Neutral / Negative / Confused | % in each category
Perceived positioning | Matches target / Misaligned / Unclear | % alignment
Key appeal driver | [Emergent themes from responses] | Theme prevalence
Primary barrier | Price / Trust / Need / Competition / Other | Barrier ranking
Purchase intent | High / Moderate / Low / None | % distribution
Suggested improvements | [Emergent categories] | Most common requests
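The "What to Quantify" column reduces to simple tallies over coded interviews. Here is a minimal sketch in Python; the field names, codes, and sample data are illustrative, not taken from any specific research platform:

```python
from collections import Counter

# Each interview is tagged on the coding dimensions above.
# Field names, codes, and data are illustrative.
coded_interviews = [
    {"reaction": "Positive", "barrier": "Price", "intent": "High"},
    {"reaction": "Positive", "barrier": "Trust", "intent": "Moderate"},
    {"reaction": "Neutral",  "barrier": "Price", "intent": "Low"},
    {"reaction": "Confused", "barrier": "Need",  "intent": "None"},
]

def distribution(interviews, dimension):
    """Percent of interviews in each coding category for one dimension."""
    counts = Counter(i[dimension] for i in interviews)
    total = len(interviews)
    return {code: round(100 * n / total, 1) for code, n in counts.items()}

print(distribution(coded_interviews, "reaction"))
# {'Positive': 50.0, 'Neutral': 25.0, 'Confused': 25.0}

# Barrier ranking: most frequently coded barriers first
print(Counter(i["barrier"] for i in coded_interviews).most_common())
# [('Price', 2), ('Trust', 1), ('Need', 1)]
```

The same tally applies to purchase-intent distribution and emergent theme prevalence once responses are coded.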

Reporting Format: Concept Testing

Page 1: Executive Summary

  • Which concept wins and why (one paragraph)
  • Key metrics: preference %, appeal drivers, primary barriers
  • Recommendation: advance, refine, or kill

Pages 2-3: Concept-by-Concept Analysis

  • For each concept: reaction distribution, key appeal, key barriers, representative verbatims
  • Cross-concept comparison on key dimensions

Page 4: Strategic Recommendations

  • What to preserve in the winning concept
  • What to modify before advancement
  • Risks to monitor
  • Suggested follow-up research

Template 2: Brand Health Tracking


Brand health tracking measures brand perception, competitive positioning, and consumer relationship over time. (For the full methodology, see brand health tracking for CPG.)

Research Brief: Brand Health

Decision at stake: Are we gaining or losing brand equity? Where are we vulnerable? Where are we strong?

Typical sample: 50-100 consumers per pulse (monthly or quarterly)

Timeline: 48-72 hours per wave

Budget: $1,000-$2,000 per wave; $12,000-$24,000 annually

Discussion Guide: Brand Health Tracking

Section 1: Category Relationship (5 minutes)

  • “Describe your relationship with [category] in one sentence.”
  • “Which brands come to mind when you think about [category]?”
  • Purpose: Measure unaided awareness and category engagement.

Section 2: Brand Perception (10 minutes)

  • “When I say [brand name], what are the first three words or images that come to mind?”
  • “If [brand name] were a person, how would you describe their personality?”
  • “What does [brand name] stand for that no other brand in this category can claim?”
  • Purpose: Track brand associations, personality, and differentiation over time.

Section 3: Competitive Context (5 minutes)

  • “Think about the last time you chose [brand] over an alternative. Walk me through that moment.”
  • “Name a brand in this category that has gotten better recently. What did they do?”
  • Purpose: Track competitive dynamics and switching triggers.

Section 4: Loyalty and Vulnerability (5 minutes)

  • “How has your relationship with [brand] changed over the past year?”
  • “What would [brand] have to do to lose you as a customer?”
  • Purpose: Track loyalty trajectory and vulnerability indicators.

Section 5: Improvement (5 minutes)

  • “If you were advising [brand]’s CEO, what one thing would you tell them?”
  • “What is one thing you wish [brand] would change?”
  • Purpose: Surface actionable improvement opportunities.

Analysis Framework: Brand Health

Track these metrics consistently across waves:

Metric | Measurement | Tracking Method
Unaided awareness | First brand mentioned in category | % over time
Brand associations | Top 3 words/images | Stability and shift
Differentiation | Unique claim identification | Presence and strength
Competitive threat | Brands “getting better” | Frequency of mention
Loyalty trajectory | Relationship direction (strengthening/weakening) | % distribution over time
Vulnerability | Switching triggers identified | New triggers per wave
Net promoter language | Would recommend with enthusiasm vs. reluctance | Qualitative NPS proxy
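The dashboard step "flag metrics that shifted significantly this wave" can be sketched as a wave-over-wave comparison. The metric names and the 10-point threshold below are assumptions for illustration, not prescriptions:

```python
def flag_shifts(previous_wave, current_wave, threshold=10.0):
    """Compare metric values (in percentage points) across waves and
    flag any metric that moved at least `threshold` points.
    Metric names and threshold are illustrative assumptions."""
    flags = {}
    for metric, current in current_wave.items():
        prev = previous_wave.get(metric)
        if prev is None:
            continue  # metric not tracked last wave; nothing to compare
        delta = current - prev
        if abs(delta) >= threshold:
            flags[metric] = delta
    return flags

q1 = {"unaided_awareness": 34.0, "loyalty_strengthening": 41.0}
q2 = {"unaided_awareness": 22.0, "loyalty_strengthening": 44.0}

print(flag_shifts(q1, q2))  # {'unaided_awareness': -12.0}
```

A flagged metric is a prompt for the deep-dive section, not a conclusion: the verbatims explain why the number moved.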

Reporting Format: Brand Health

Dashboard (1 page):

  • Wave-over-wave trends for core metrics
  • Flag: metrics that shifted significantly this wave
  • Competitive alert: any competitor gaining positive mentions

Deep dive (2-3 pages):

  • What changed this wave and why (with verbatim evidence)
  • Emerging themes not captured in standard metrics
  • Recommendations for brand team action

Template 3: Packaging Validation


Packaging validation tests how design, messaging, and format affect consumer perception and purchase behavior.

Research Brief: Packaging

Decision at stake: Which packaging design best communicates our brand positioning at shelf?

Typical sample: 100-200 verified category purchasers

Timeline: 48-72 hours

Budget: $2,000-$4,000

Discussion Guide: Packaging Validation

Section 1: Current Packaging Context (3 minutes)

  • “Think about the packaging of products you buy in [category]. What does great packaging look like?”
  • “What about packaging makes you trust a product versus question it?”
  • Purpose: Establish packaging evaluation criteria before exposure.

Section 2: Initial Exposure (5 minutes)

  • Present packaging concept(s)
  • “Look at this packaging for a few seconds. Now tell me what you remember.”
  • “What quality level does this packaging suggest — budget, mainstream, or premium?”
  • Purpose: Test attention capture, information hierarchy, and quality inference.

Section 3: Detailed Evaluation (10 minutes)

  • “What information are you looking for on this packaging that you cannot find?”
  • “Compare this to what you usually buy. What stands out as different?”
  • “What does this packaging tell you about the company behind the product?”
  • Purpose: Evaluate information design, differentiation, and brand communication.

Section 4: Shelf Context (5 minutes)

  • “If you saw this from five feet away in a store, would you know what it is?”
  • “If choosing between this and [competitor packaging], which would you grab first?”
  • Purpose: Test shelf presence and competitive performance.

Section 5: Functional and Social Evaluation (5 minutes)

  • “Walk me through using this product at home. Does the packaging support that?”
  • “Does anything about this packaging make you question the product’s quality?”
  • Purpose: Test functional design and trust signals.

Analysis Framework: Packaging

Dimension | Measurement | Decision Implication
First impression recall | What consumers remember after brief exposure | Information hierarchy effectiveness
Quality-price inference | Perceived quality tier | Price-positioning alignment
Category recognition | Instant identification at distance | Shelf legibility
Competitive preference | Choice vs. competitor packaging | Shelf competitiveness
Information gaps | Missing information identified | Label content decisions
Trust signals | Trust vs. doubt triggers | Design element decisions
Functional assessment | Usage/storage satisfaction | Structural design decisions

Template 4: Innovation Pipeline Screening


Innovation screening evaluates multiple concepts quickly to identify winners before investing in full development. (See our product innovation research template for the full innovation framework.)

Research Brief: Innovation Screening

Decision at stake: Which 3-4 concepts from our pipeline of 10-15 deserve full development investment?

Typical sample: 30-50 interviews per concept (monadic) or 150-300 total (sequential)

Timeline: 1-2 weeks for full pipeline

Budget: $4,000-$10,000 for full pipeline

Discussion Guide: Innovation Screening

This is a compressed version of the concept testing guide, designed for rapid evaluation:

Section 1: Category Need (3 minutes)

  • “What frustrates you most about the current options in [category]?”
  • “Describe a moment in the past month when you wished a product existed that does not currently.”
  • Purpose: Establish unmet need landscape.

Section 2: Concept Reaction (10 minutes per concept)

  • Present concept
  • “Tell me your initial reaction.”
  • “Would this solve a real problem for you? What problem?”
  • “What is the biggest thing standing between you and buying this?”
  • Purpose: Rapid assessment of appeal, relevance, and barriers.

Section 3: Comparative Ranking (5 minutes, if sequential)

  • “Of the concepts you have seen, which one would you be most likely to buy? Why?”
  • “Which would you be least likely to buy? Why?”
  • Purpose: Relative ranking with motivation data.

Analysis Framework: Innovation Screening

Metric | Threshold | Decision
Spontaneous appeal (% positive initial reaction) | >60% | Advance
Problem-solution fit (% identifies real problem) | >50% | Advance
Purchase barrier severity (% dealbreaker barriers) | <30% | Advance
Comparative rank (top 3 of pipeline) | Top 3 | Advance to full testing

Concepts that pass all thresholds advance to full concept testing. Concepts that fail on appeal or problem-solution fit are killed. Concepts that pass on appeal but have high barriers are candidates for refinement.
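Those advance/refine/kill rules can be written down directly. This is a sketch of the decision logic under the thresholds in the table; the treatment of concepts that clear every threshold but fall outside the top 3 is an assumption, since the framework does not specify it:

```python
def screen_concept(appeal_pct, problem_fit_pct, dealbreaker_pct, in_top3):
    """Apply the screening thresholds from the table above.
    Inputs are percentages from the coded interviews; this is a sketch,
    not a platform API."""
    if appeal_pct <= 60 or problem_fit_pct <= 50:
        return "kill"      # fails on appeal or problem-solution fit
    if dealbreaker_pct >= 30:
        return "refine"    # appealing, but barrier severity too high
    if not in_top3:
        return "refine"    # clears thresholds but outranked (assumption)
    return "advance"       # passes all thresholds: full concept testing

print(screen_concept(72, 55, 18, True))   # advance
print(screen_concept(45, 60, 10, True))   # kill
print(screen_concept(68, 58, 40, False))  # refine
```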

Template 5: Consumer Segmentation


Consumer segmentation identifies distinct groups within a category based on motivations, behaviors, and values — not just demographics.

Research Brief: Segmentation

Decision at stake: How should we tailor our brand strategy, messaging, and product portfolio by consumer segment?

Typical sample: 200-500 verified category purchasers

Timeline: 1-2 weeks

Budget: $4,000-$10,000

Discussion Guide: Consumer Segmentation

Section 1: Category Relationship (5 minutes)

  • “Describe your relationship with [category] in one sentence.”
  • “How much time and thought do you put into choosing [category] compared to other grocery purchases?”
  • Purpose: Measure category involvement and emotional engagement.

Section 2: Decision Process (10 minutes)

  • “Walk me through the last three times you bought [category]. Were they the same brand?”
  • “What information sources do you trust when deciding what to buy in this category?”
  • “When you think about spending on [category], is it a necessity, a treat, or something else?”
  • Purpose: Map decision patterns, information sources, and mental accounting.

Section 3: Values and Priorities (10 minutes)

  • “How does [category] fit into your broader values about food, health, and spending?”
  • “Walk me through what matters most: taste, convenience, health benefits, price, brand. Rank them.”
  • “If your household budget got tighter, what would change about how you buy [category]?”
  • Purpose: Surface the value hierarchy and price elasticity by segment.

Section 4: Brand Relationships (5 minutes)

  • “Tell me about a time you switched brands in this category.”
  • “What would the perfect [category] product look like for someone exactly like you?”
  • Purpose: Map switching behavior and unmet needs by segment.

Analysis Framework: Segmentation

Step 1: Individual coding

Tag each participant on: category involvement (high/medium/low), primary decision driver (taste/health/price/convenience/brand), switching behavior (loyal/rotator/deal-driven), value orientation (quality/value/indulgence/wellness).

Step 2: Cluster identification

Group participants by motivation patterns, not demographics. Common CPG segments:

  • Health-driven premium buyers: High involvement, quality-first, willing to pay more
  • Convenience optimizers: Low involvement, habitual, time-sensitive
  • Deal navigators: Price-driven, brand-flexible, promotion-responsive
  • Brand loyalists: Emotionally attached, resistant to switching, quality-premium
  • Explorers: Variety-seeking, trend-influenced, willing to try new products

Step 3: Segment profiling

For each segment: size estimate, motivation hierarchy, brand perceptions, switching triggers, unmet needs, and recommended brand strategy.
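The first pass of the Step 2 grouping can be sketched as a tally over Step 1 tag combinations, before any formal clustering. Participant data and tag names below are hypothetical:

```python
from collections import defaultdict

# Step 1 tags per participant (hypothetical data)
participants = [
    {"id": "p01", "involvement": "high", "driver": "health", "switching": "loyal"},
    {"id": "p02", "involvement": "high", "driver": "health", "switching": "loyal"},
    {"id": "p03", "involvement": "low",  "driver": "convenience", "switching": "rotator"},
    {"id": "p04", "involvement": "low",  "driver": "price", "switching": "deal-driven"},
]

def cluster_by_tags(people, keys=("involvement", "driver", "switching")):
    """Group participants who share the same motivation-tag combination.
    Real segmentation would then merge near-identical combinations; this
    is only the first-pass grouping."""
    clusters = defaultdict(list)
    for p in people:
        clusters[tuple(p[k] for k in keys)].append(p["id"])
    return dict(clusters)

for tags, ids in cluster_by_tags(participants).items():
    print(tags, "->", ids, f"({len(ids)}/{len(participants)} of sample)")
```

Each resulting group's share of the sample is the size estimate that feeds Step 3's segment profiles.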

Reporting Format: Segmentation

Page 1: Segment Overview

  • Visual segment map with sizes and key descriptors
  • One-sentence positioning for each segment

Pages 2-6: Segment Deep Dives (one per segment)

  • Motivation hierarchy (laddering results)
  • Purchase behavior and brand relationships
  • Representative verbatims
  • Strategic implications: messaging, product, pricing, and channel recommendations

Page 7: Cross-Segment Strategy

  • Where segments overlap and where they diverge
  • Portfolio strategy recommendations
  • Priority segments for investment

How Do You Build a Research Program from These Templates?


These five templates are designed to work as an integrated system, not isolated tools.

The Monthly Cadence

  • Week 1: Brand health pulse (Template 2) — 50-100 interviews, $1,000-$2,000
  • Week 2: Topic-specific study based on brand health findings — concept, packaging, or innovation screening (Templates 1, 3, or 4)
  • Week 3: Analysis and stakeholder reporting
  • Week 4: Follow-up study if needed, or segmentation pulse (Template 5)

This cadence delivers 24-48 studies per year for $24,000-$96,000 — replacing the traditional model of 3-5 annual studies at $100,000-$300,000.
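The annual figures follow from the cadence arithmetic: roughly 2-4 studies per month at $1,000-$2,000 each. A quick check:

```python
def annual_range(studies_per_month, cost_per_study):
    """Both arguments are (low, high) tuples; returns the annual study
    count and spend ranges implied by the monthly cadence."""
    studies = (studies_per_month[0] * 12, studies_per_month[1] * 12)
    spend = (studies[0] * cost_per_study[0], studies[1] * cost_per_study[1])
    return studies, spend

studies, spend = annual_range((2, 4), (1000, 2000))
print(studies)  # (24, 48)
print(spend)    # (24000, 96000)
```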

The Intelligence Hub Advantage

When all five template types feed into the same Intelligence Hub, cross-study connections emerge automatically:

  • Brand health findings connect to concept test results
  • Segmentation data enriches packaging validation insights
  • Innovation screening results build on previous concept test learnings
  • Every study makes every future study more valuable

This compounding effect is what transforms CPG research from a project-based expense into a strategic asset. The templates ensure consistency across studies; User Intuition’s Intelligence Hub ensures accumulation across time.

Adapting Templates for AI Moderation

All five templates work with AI-moderated interviews. For AI-moderated studies:

  • Load primary questions into the platform
  • The AI moderator handles all follow-up probing automatically using 5-7 level laddering
  • Analysis and reporting are generated automatically
  • Results feed User Intuition’s Intelligence Hub for longitudinal analysis

For human-moderated studies, use the full discussion guides with timing notes and branching logic.

Ready to use these templates with real consumers? Launch a free study with 30 AI-moderated interviews in 48 hours. No credit card required. Or book a demo to walk through how the templates work inside the platform.

Frequently Asked Questions

What does a complete CPG market research template include?

A complete CPG market research template includes five components: (1) Research brief — the business question, decision at stake, and success criteria. (2) Target audience definition — screener criteria, sample size, and segment allocation. (3) Discussion guide — primary questions and probing framework. (4) Analysis framework — coding dimensions and what to quantify. (5) Reporting format — how findings are structured for the stakeholders making the decision.

What should a CPG research brief cover?

A CPG research brief answers four questions: What business decision does this research inform? What do we already know (and what is the specific gap)? Who do we need to hear from? What would a successful study look like — what findings would change our approach? The brief should be one page maximum. If it takes more than one page, the research objective is not specific enough.

What is the best discussion guide format for CPG research?

The best discussion guide format for CPG research follows a funnel structure: warm-up (category behavior), context setting (purchase journey), core investigation (the specific research objective), comparative evaluation (alternatives and competition), and synthesis (overall assessment and barriers). Each section has 2-3 primary questions with branching follow-up probes. Total primary questions: 10-12 for a 30-minute interview.

How should CPG research be analyzed?

CPG research analysis follows a three-layer approach: (1) Individual-level coding — tag each response by theme, sentiment, and motivation level. (2) Cross-participant synthesis — identify patterns across respondents, quantify theme prevalence, and surface contradictions. (3) Strategic interpretation — connect findings to the business decision, identify actionable recommendations, and flag areas requiring further investigation.

How often should research templates be updated?

Core templates (research brief, discussion guide structure, analysis framework) should be stable — update annually or when methodology changes. Topic-specific modules (concept testing questions, brand health metrics, packaging evaluation criteria) should be updated quarterly based on what is working. The Intelligence Hub tracks which questions produce the most actionable insights, enabling evidence-based template optimization.

Do these templates work with AI-moderated interviews?

Yes. These templates are designed for both AI-moderated and human-moderated interviews. For AI-moderated studies, load the primary questions and probing objectives into the platform — the AI moderator handles adaptive follow-up probing automatically. For human-moderated studies, the full discussion guide with branching logic and timing notes is provided.

How is a market research template different from a marketing strategy template?

A market research template structures how you gather consumer evidence — study design, questions, analysis, reporting. A marketing strategy template structures how you plan campaigns, allocate budget, and measure marketing performance. Many online “CPG research templates” are actually marketing templates that include sections on target market, competitive positioning, and channel strategy.