
Product Innovation Research Template for CPG Teams

By Kevin, Founder & CEO

Every CPG company runs innovation research. Most of it underperforms — not because the methodology is flawed, but because the brief that launches the study is incomplete. The innovation manager writes a vague objective. The screener criteria default to last quarter’s study. Success criteria get defined after the data comes back, which means they get reverse-engineered to justify whatever the results show. The discussion guide borrows questions from an unrelated category. And the findings land in a deck that nobody references when the actual go/no-go decision happens three weeks later.

The fix is not better research methodology. It is a better brief.

This template gives CPG product innovation, R&D, and category management teams a reusable framework for structuring every type of innovation study — concept screening, line extension validation, reformulation testing, new category entry, packaging evaluation. Each section forces you to make the decisions that determine whether a study produces actionable output or expensive noise. Fill it in before you launch. Use it every time. Watch your innovation hit rate climb.

Why CPG Teams Need a Standardized Innovation Research Template


CPG innovation pipelines are not single-study affairs. In any given quarter, a mid-size CPG company might run concept screens on three new flavors, validate a line extension into an adjacent category, test reformulated versions of two legacy products, evaluate packaging redesigns for retail channel expansion, and explore pricing elasticity on a premium tier. Each of these is a distinct study type with different objectives, different consumer profiles, and different success criteria.

Without a standardized template, each study gets designed from scratch. The brand manager running the flavor screen structures it differently than the R&D lead running the reformulation test. The category manager exploring new channel entry uses a completely different discussion guide than the innovation director screening early-stage concepts. Six months later, when leadership asks “what did we learn this year about our consumers,” nobody can synthesize the findings because every study used a different framework, different terminology, and different success benchmarks.

This is the core problem: inconsistent methodology produces incomparable results. You cannot build institutional knowledge from a collection of one-off studies that each define “success” differently.

The second problem is access. In most CPG organizations, the insights team is a bottleneck. Category managers, R&D leads, and brand managers all need consumer evidence to make decisions, but they are not trained researchers. They know what they need to learn — they do not know how to structure a study that will teach it to them reliably. A standardized template solves this by encoding research best practices into a fill-in-the-blank format that anyone can use.

This is the democratization angle that separates modern innovation research from the agency model. When everyone works from the same base — the same hypothesis structure, the same consumer profiling framework, the same pre-defined success criteria, the same discussion guide scaffolding — the quality floor rises across the organization. The insights team stops spending half its time designing studies and starts spending that time interpreting results and connecting patterns across studies.

For the complete discipline of product innovation research, including where it fits in the product lifecycle and how it differs from concept testing, see the product innovation research complete guide. For how this template connects to a broader innovation research program, see the product innovation solution page. For a comparison of the platforms available to run templated innovation studies at scale, see the best product innovation platforms.

The CPG Innovation Brief Framework: 6 Sections


The framework has six sections. Each one addresses a specific failure mode in CPG innovation research. Skip any section and you introduce a predictable gap that will compromise your findings.

Section 1: Innovation Hypothesis

Every innovation study should begin with a testable hypothesis — a specific belief about what the consumer needs that you are trying to validate or invalidate. This is where most CPG briefs fail. They start with an objective like “understand consumer attitudes toward plant-based snacks” or “explore interest in a new beverage format.” These are topics, not hypotheses. A topic gives the moderator something to talk about. A hypothesis gives the analyst something to measure against.

A well-formed CPG innovation hypothesis has three components:

The consumer need statement. What you believe a specific consumer segment needs that is not being adequately met by existing options. This must be specific enough to be wrong. “Consumers want healthier snacks” is unfalsifiable. “Health-conscious snack consumers who currently buy brand X want a high-protein variant but will not sacrifice taste — and no competitor has solved this” is testable.

The category context. What is the competitive shelf set? Which products does your target consumer currently choose from, and where does your innovation concept fit relative to those options? A new sparkling water flavor competes in a different context than a new functional beverage — even if both are “beverages.” Define the shelf set your consumer actually navigates.

The market timing rationale. Why this innovation matters now. Is there a consumer trend accelerating demand? A competitive gap opening? A regulatory change creating opportunity? A supply chain capability you did not have before? The timing rationale forces you to articulate why this study is worth running today rather than next quarter.

Example: “Parents of children aged 4-10 who buy conventional fruit snacks weekly are actively looking for lower-sugar alternatives but reject existing options because they taste ‘like cardboard’ and their kids refuse to eat them. No brand has successfully delivered a lower-sugar fruit snack that kids actually request by name. Our reformulation achieves a 40% sugar reduction with taste parity in blind testing — this study validates whether the ‘lower sugar, same taste’ positioning resonates with the purchase decision-maker.”
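The three-part hypothesis can be captured as a structured record so every brief carries the same fields. A minimal Python sketch — the class and field names are illustrative, not part of any platform:

```python
from dataclasses import dataclass

@dataclass
class InnovationHypothesis:
    """Three-component hypothesis; each field must be specific enough to be wrong."""
    consumer_need: str     # the unmet need, stated falsifiably
    category_context: str  # the shelf set the consumer actually navigates
    timing_rationale: str  # why this study is worth running now, not next quarter

fruit_snack = InnovationHypothesis(
    consumer_need=("Parents of kids 4-10 who buy fruit snacks weekly want a "
                   "lower-sugar option their kids will actually request"),
    category_context="Conventional fruit snacks in the grocery snack aisle",
    timing_rationale="Reformulation hit 40% sugar reduction with taste parity in blind tests",
)

# A brief is incomplete if any component is missing or empty
assert all(vars(fruit_snack).values())
```

Forcing each component into its own field makes a vague brief visible immediately: a topic ("explore interest in X") cannot fill all three slots.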

Section 2: Consumer Profile

The consumer profile defines who you are studying and why they are the right people to test your hypothesis. In CPG, this goes well beyond demographics. You need to specify:

Category usage and purchase frequency. How often does your target consumer buy in this category? Weekly buyers have different expectations and reference points than monthly buyers. Someone who buys laundry detergent every six weeks evaluates a new formulation differently than someone who buys it weekly for a large household.

Brand relationships. Is your target a loyal user of your brand, a brand switcher, a competitor loyalist, or a lapsed user? Each relationship type produces different data. Loyal users tell you about brand stretch. Switchers reveal what triggers switching. Competitor loyalists reveal barriers to trial. Lapsed users explain what drove abandonment.

Demographic and psychographic filters. Age, income, household composition — the standard screener criteria. But also: health orientation, price sensitivity, sustainability consciousness, convenience priority. In CPG, psychographic segmentation often matters more than demographics. A 25-year-old and a 55-year-old who both prioritize clean ingredients may respond identically to a reformulated product.

Occasion mapping. When, where, and why does your target use this category? A snack consumed at a desk at 3pm serves a different need than the same snack eaten in a car during a morning commute. A cleaning product used for weekly deep cleaning has different performance expectations than one used for daily surface wipe-downs. Occasion context determines which product attributes matter most.

Example screener criteria for a new functional beverage concept: “Adults 25-45 who purchase ready-to-drink beverages at least twice per week, currently buy at least one functional/enhanced beverage monthly (e.g., Celsius, Poppi, Olipop, Liquid IV), shop for beverages at grocery and/or convenience stores, and self-identify as health-conscious but not willing to sacrifice taste for function.”
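Screener criteria like these translate directly into an explicit filter. A hedged sketch in Python, assuming a simple respondent dictionary (all keys are hypothetical):

```python
def passes_screener(respondent: dict) -> bool:
    """Screener for the functional-beverage example; key names are illustrative."""
    return (
        25 <= respondent["age"] <= 45
        and respondent["rtd_purchases_per_week"] >= 2
        and respondent["functional_beverage_buys_per_month"] >= 1
        and respondent["shops_grocery_or_convenience"]
        and respondent["health_conscious_taste_first"]
    )

candidate = {
    "age": 32,
    "rtd_purchases_per_week": 3,
    "functional_beverage_buys_per_month": 2,
    "shops_grocery_or_convenience": True,
    "health_conscious_taste_first": True,
}
print(passes_screener(candidate))  # → True
```

Writing the screener as code is a useful discipline even if recruitment happens on a platform: any criterion you cannot express as a concrete check is a criterion your panel provider cannot enforce either.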

Section 3: Success Criteria (Define Before You Launch)

This is the most important section of the template and the one CPG teams most frequently skip. Success criteria must be defined before you see any data. If you define them after, you are not measuring success — you are rationalizing results.

Minimum unmet need strength threshold. What percentage of participants must articulate the need your innovation addresses as a genuine gap? If you hypothesized that parents want lower-sugar fruit snacks, but only 20% of parents mention sugar as a concern unprompted, your hypothesis may be wrong — or your consumer profile may be off. Set the threshold before you launch: “At least 60% of participants must identify sugar reduction as a top-three purchase consideration for this category.”

Concept appeal benchmarks. How do you define “strong appeal” versus “moderate interest” versus “rejection”? Anchor these to the category, not to abstract scales. In a commoditized category like paper towels, moderate interest might be sufficient. In a premium category like specialty beverages, you need strong emotional appeal to justify the price premium.

Purchase intent cutoffs. What level of stated purchase intent constitutes a go signal? Be specific about how you will interpret the data. “Would definitely buy” from 30%+ of the target is a strong signal. “Would probably buy” from 50% is ambiguous — it depends on category norms. Define your cutoffs relative to category benchmarks and past study results.

Acceptable concern frequency and severity. Every concept generates concerns. The question is not whether concerns exist but whether they are manageable. Define thresholds: “If more than 40% of participants raise the same concern, it must be addressed before proceeding. If a concern is mentioned by fewer than 15% of participants, it is noted but does not block progress.”

Competitive differentiation requirements. Can participants articulate why your concept is different from and better than their current solution? If they cannot, your positioning has a problem regardless of how much they like the concept in isolation.

Cannibalization risk thresholds (for line extensions). If you are extending an existing product line, what level of self-cannibalization is acceptable? A new flavor that steals 80% of its volume from your existing top seller is not a line extension — it is a substitution. Define the maximum acceptable cannibalization rate before you see the data.

The discipline of pre-defining success criteria prevents the most common pathology in CPG innovation research: the post-hoc rationalization cycle. Without pre-set thresholds, teams unconsciously adjust their interpretation to support the outcome they want. A concept that tested poorly becomes “directionally positive with iteration potential.” A line extension with high cannibalization risk becomes “evidence of strong brand affinity.” Pre-defined criteria eliminate this. The data either meets the threshold or it does not.
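Pre-defined thresholds can be written down as data before fieldwork and checked mechanically afterward, which leaves no room for post-hoc adjustment. A sketch using the illustrative numbers from this section:

```python
# Thresholds set before fieldwork (values are the illustrative ones from this section)
THRESHOLDS = {
    "unmet_need_pct": 60,          # % naming the need as a top-three purchase consideration
    "definitely_buy_pct": 30,      # % answering "would definitely buy"
    "max_shared_concern_pct": 40,  # a concern above this must be addressed before proceeding
}

def check_criteria(results: dict) -> dict:
    """Compare observed percentages against the pre-defined thresholds, per criterion."""
    return {
        "unmet_need": results["unmet_need_pct"] >= THRESHOLDS["unmet_need_pct"],
        "purchase_intent": results["definitely_buy_pct"] >= THRESHOLDS["definitely_buy_pct"],
        "concerns": results["top_concern_pct"] <= THRESHOLDS["max_shared_concern_pct"],
    }

observed = {"unmet_need_pct": 64, "definitely_buy_pct": 27, "top_concern_pct": 22}
print(check_criteria(observed))  # purchase intent misses its cutoff; the other two pass
```

The point is not the code but the ordering: `THRESHOLDS` is committed before `observed` exists, so the comparison cannot drift toward the result.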

Section 4: Discussion Guide Scaffolding

The discussion guide is the operational heart of the study. For a 30-minute AI-moderated interview — the format that delivers the best depth-to-cost ratio for CPG innovation research — structure the conversation in five blocks:

Block 1: Category Context (5 minutes). Establish the participant’s current category behavior before introducing any concept. What do they currently buy in this category? How often? Where? What are they satisfied with? What frustrates them? What have they tried and abandoned? This block reveals the baseline against which your innovation will be evaluated. Do not skip it — concept reactions without category context are uninterpretable.

Block 2: Occasion Mapping (5 minutes). Drill into when, where, and why they use this category. What triggers a purchase? What triggers consumption? Are there occasions they wish they had a better option? Are there occasions where they use a substitute from a different category because nothing in this category fits? Occasion mapping reveals the white space your innovation could fill.

Block 3: Concept Reaction (10 minutes). Present the concept and capture first impressions, appeal drivers, concerns, and points of confusion. This is the longest block because it is where the depth matters most. First impressions are diagnostic — they reveal what the concept communicates before rational evaluation takes over. Appeal drivers tell you what is working. Concerns tell you what needs to change. Confusion tells you what your positioning is failing to communicate.

Block 4: Purchase Intent and Competitive Context (5 minutes). Would they buy this? What would they switch from? What would prevent them from trying it? How does it compare to their current solution? This block connects concept appeal to actual purchase behavior — the gap between “I like it” and “I would buy it” is where most CPG innovation research produces its most valuable insights.

Block 5: Price Sensitivity and Channel Expectations (5 minutes). Where would they expect to find this product? What would they expect to pay? Is there a price point at which interest drops sharply? Which retail channel fits the concept positioning? A premium functional beverage that consumers expect to find at Whole Foods has a different distribution strategy than one they expect at 7-Eleven.
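The five blocks and their time allocations can be expressed as a simple checklist that confirms the guide fills the 30-minute interview exactly:

```python
# Five-block discussion guide for a 30-minute interview (names and minutes from the template)
GUIDE = [
    ("Category Context", 5),
    ("Occasion Mapping", 5),
    ("Concept Reaction", 10),
    ("Purchase Intent and Competitive Context", 5),
    ("Price Sensitivity and Channel Expectations", 5),
]

total_minutes = sum(minutes for _, minutes in GUIDE)
assert total_minutes == 30, "blocks must fill the interview length exactly"
print(total_minutes)  # → 30
```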

AI-moderated platforms like User Intuition use structured laddering to probe 5-7 levels deep automatically on each response. When a participant says “I like the taste,” the AI follows up: what specifically about the taste? How does it compare to what you currently use? What would make it better? Would taste alone be enough to switch? This depth — applied consistently across every interview — is what separates actionable insight from surface-level preference data.

For a comprehensive list of questions to draw from when building your discussion guide, see 60 product innovation interview questions.

Section 5: Analysis Plan

Define how findings will be structured before the study launches — not after you are staring at transcripts wondering where to start.

Structure by theme, segment, or concept variant. For concept screens, organize findings by appeal drivers, concerns, and competitive positioning. For segmented studies, analyze within each segment first, then compare across segments. For multi-concept studies, create a comparison framework that uses the same evaluation criteria for each concept.

Define the audience and format. Who sees the results? The innovation committee needs a two-page executive summary with go/iterate/kill recommendations. The R&D team needs detailed consumer language around sensory expectations. The brand team needs positioning implications and competitive framing. One study, multiple deliverables — define them before fieldwork so analysis is structured to serve each audience.

Map decision criteria to success thresholds. Create a decision framework that connects directly to Section 3. If the concept meets all five success criteria, it advances to the next stage. If it meets three of five, it iterates with specific improvements. If it meets fewer than three, it is shelved or killed. This framework makes the post-study decision meeting a 15-minute calibration rather than a two-hour debate.
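The go/iterate/kill mapping above is simple enough to state as a function. One assumption in this sketch: the text specifies outcomes for five, three, and fewer-than-three criteria met, so a four-of-five result is resolved here as iterate:

```python
def stage_gate_decision(criteria_met: int, total_criteria: int = 5) -> str:
    """Map the count of met success criteria (from Section 3) to a pipeline decision."""
    if criteria_met == total_criteria:
        return "advance"
    if criteria_met >= 3:
        return "iterate"  # treating 4 of 5 as iterate is an assumption
    return "shelve or kill"

print(stage_gate_decision(5))  # → advance
print(stage_gate_decision(3))  # → iterate
print(stage_gate_decision(2))  # → shelve or kill
```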

Connect to the broader innovation pipeline. Where does this study sit in the stage-gate process? What study comes next if the concept advances? What decision does this study unlock? Innovation research does not exist in isolation — each study should explicitly reference the studies that preceded it and the decisions it informs downstream.

Section 6: Knowledge Capture

The most expensive waste in CPG innovation research is not a failed study — it is a successful study whose findings are never referenced again. Industry data suggests that over 90% of qualitative research findings are effectively lost within 90 days of delivery. The deck goes on a shared drive. Nobody can find it. The next team that runs a study in the same category starts from zero.

Knowledge capture is not an afterthought. It is a section of the brief because it must be planned before the study launches.

Tag and categorize for future retrieval. Every study should be tagged by category, consumer segment, innovation type, and key themes. When a brand manager asks “what do we know about how millennials think about sustainability in the snack aisle,” the answer should be retrievable in minutes, not weeks.

Document cross-study connections. What previous studies does this one build on? What hypotheses from earlier research is this study testing? What findings from this study should inform future research? These connections are what transform a collection of discrete studies into a compounding intelligence asset.

Feed the intelligence hub. The market shifts faster than annual studies can track. Consumer preferences evolve. Competitors launch. Regulatory environments change. A customer intelligence hub that stores every conversation, codes every theme, and surfaces patterns across studies and time periods is what makes innovation research compound rather than depreciate. Each study makes the next one smarter — but only if findings are captured in a system designed for retrieval, not in a PDF on a shared drive.
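Tag-based retrieval is easy to sketch: if each archived study carries consistent tags, a question like "what do we know about this category" becomes a filter rather than a hunt through a shared drive. A hypothetical schema in Python — the field names are illustrative, not a real platform API:

```python
# One archived study with consistent tags (hypothetical schema)
study = {
    "id": "2025-q1-fruit-snack-reformulation",
    "category": "fruit snacks",
    "segment": "parents of kids 4-10",
    "innovation_type": "reformulation",
    "themes": ["sugar reduction", "taste parity", "kid approval"],
    "builds_on": ["2024-q3-snack-aisle-exploration"],
}

def find_studies(archive: list, **filters) -> list:
    """Return studies where every filter matches a tag exactly,
    or is contained in a list-valued tag such as themes."""
    return [
        s for s in archive
        if all(
            v == s.get(k) or (isinstance(s.get(k), list) and v in s[k])
            for k, v in filters.items()
        )
    ]

print(len(find_studies([study], category="fruit snacks", themes="sugar reduction")))  # → 1
```

The `builds_on` field is what makes cross-study connections queryable: the tenth study on a segment can list its nine predecessors explicitly instead of relying on team memory.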

Innovation Research Timeline: 8 Weeks From Hypothesis to Decision


The template above defines what to do. This timeline defines when to do it and who owns each phase.

Phase 1: Discovery (Weeks 1-2)

Owner: Innovation Manager + Consumer Insights Lead

Week 1: Complete Sections 1-3 of the template (hypothesis, consumer profile, success criteria). Align with R&D on technical feasibility constraints. Brief marketing on positioning hypotheses. Deliverable: completed innovation brief with pre-defined success criteria.

Week 2: Launch exploratory consumer study (50-100 interviews) to validate the unmet need hypothesis. Use Block 1 and Block 2 of the discussion guide (category context and occasion mapping). Deliverables: unmet need validation data; occasion map; initial consumer language.

Phase 2: Concept Development (Weeks 3-4)

Owner: R&D Lead + Brand Manager

Week 3: Develop 2-3 concept articulations based on Discovery findings. R&D confirms formulation feasibility. Marketing drafts positioning statements for each concept. Deliverable: 2-3 testable concepts with positioning language.

Week 4: Launch concept screening study (100-200 interviews) using Blocks 3-5 of the discussion guide (concept reaction, purchase intent, price sensitivity). Deliverable: concept screening data with per-concept scoring against success criteria.

Phase 3: Validation (Weeks 5-6)

Owner: Consumer Insights Lead + Category Manager

Week 5: Analyze concept screening results against pre-defined success criteria. Identify the lead concept and specific refinements needed. Conduct competitive shelf set analysis. Deliverables: go/iterate/kill recommendation for each concept; refinement brief for the lead concept.

Week 6: Launch validation study on the refined lead concept (75-150 interviews). Include competitive context, cannibalization assessment (for line extensions), and channel fit evaluation. Deliverable: validation data with segment-level analysis and competitive positioning evidence.

Phase 4: Refinement and Decision (Weeks 7-8)

Owner: Innovation Manager + Cross-Functional Steering Committee

Week 7: Synthesize all research into a single innovation recommendation document. Map findings to the analysis plan from Section 5. Prepare the go/no-go presentation with customer evidence. Deliverable: innovation recommendation with supporting consumer evidence.

Week 8: Present to the steering committee. The decision meeting uses pre-defined success criteria as the evaluation framework. If approved, hand off to commercialization with the consumer evidence package. Deliverables: go/no-go decision; commercialization brief with consumer evidence; study archived in the intelligence hub.

With AI-moderated interviews on User Intuition, each research phase (Weeks 2, 4, 6) completes in 48-72 hours rather than 3-4 weeks — which is what makes the 8-week end-to-end timeline feasible. Traditional research timelines stretch this same process to 6-9 months.

Stakeholder Mapping: Who to Involve and When


Innovation research touches every function. The most common failure mode is involving the wrong people at the wrong phase — either too many stakeholders too early (paralysis by committee) or too few too late (findings rejected because key functions were not consulted).

R&D / Formulation Team

  • When: Phase 1 (feasibility input on hypotheses), Phase 2 (concept development), Phase 4 (refinement specifications)
  • What they need: Consumer language around sensory expectations, performance thresholds, and attribute trade-offs. R&D translates “consumers want it to taste less artificial” into formulation specifications.
  • How they use it: Feasibility assessment, formulation targets, reformulation priorities

Marketing / Brand Team

  • When: Phase 1 (positioning hypotheses), Phase 2 (concept articulation), Phase 4 (go-to-market brief)
  • What they need: Consumer language for positioning, emotional drivers of appeal, competitive differentiation from the buyer’s perspective. Marketing turns consumer evidence into packaging copy and campaign messaging.
  • How they use it: Positioning development, claims substantiation, launch messaging, media targeting based on segment profiles

Supply Chain / Operations

  • When: Phase 2 (feasibility check on production and distribution), Phase 4 (commercialization planning)
  • What they need: Projected volume based on purchase intent data, channel expectations from consumer interviews (where they expect to find the product), and format/size preferences that affect packaging and logistics.
  • How they use it: Production planning, distribution strategy, packaging specifications

Consumer Insights Team

  • When: All phases (they are the research operators)
  • What they need: The completed template. Their role is designing the study, managing fieldwork, analyzing results, and synthesizing findings into the deliverables each stakeholder function needs.
  • How they use it: Study design, analysis framework, cross-study pattern identification, intelligence hub curation

Category Management

  • When: Phase 1 (competitive shelf context), Phase 3 (competitive positioning and channel fit), Phase 4 (retail sell-in evidence)
  • What they need: Competitive shelf set data, shopper purchase intent by channel, cannibalization risk assessment for line extensions, and consumer evidence that supports retail buyer presentations.
  • How they use it: Shelf strategy, assortment recommendations, retail partner sell-in, promotional planning

Adapting the Template by Innovation Type


The six-section framework applies to every CPG innovation study, but the emphasis shifts depending on the innovation type.

Concept screening. Section 1 emphasizes broad exploration of the need space. Section 2 casts a wider consumer net. Section 3 uses directional thresholds (not hard cutoffs) because the goal is learning, not deciding. Section 4 spends more time in Blocks 1-2 (category context and occasion mapping) and less in Block 5 (price/channel).

Line extension testing. Section 1 focuses on brand stretch hypotheses — will existing customers accept this variant under the parent brand? Section 2 narrows to existing brand users plus adjacent-brand switchers. Section 3 adds cannibalization risk as a primary success criterion. Section 4 adds questions about brand fit, variant expectations, and whether the extension enhances or dilutes brand perception.

Reformulation validation. Section 1 tests a change hypothesis — will existing users accept the reformulated product? Section 2 targets current heavy users who have the strongest reference point for the existing product. Section 3 defines acceptable performance parity thresholds (the reformulation must not score worse than baseline on key attributes). Section 4 uses blind and branded evaluation sequences to isolate functional from brand-driven responses.

New category entry. Section 1 is purely exploratory — no baseline exists. Section 2 draws from adjacent category users and lifestyle cohorts. Section 3 uses opportunity-sizing criteria rather than go/no-go thresholds. Section 4 spends the majority of time in Blocks 1-2 because understanding the category dynamics is the primary objective.

Packaging and design testing. Section 1 tests communication hypotheses — does the packaging convey the intended positioning? Section 2 includes both category users and shelf-proximate shoppers (people who walk the aisle but do not currently buy in the category). Section 4 requires visual stimulus and adds shelf-context evaluation — does the package stand out, communicate the key benefit, and signal the right price tier?

Common Template Mistakes CPG Teams Make


Even with a standardized template, teams make predictable errors. Knowing them in advance helps you avoid them.

Skipping the hypothesis. The most common mistake. Teams write an objective (“explore consumer interest in X”) instead of a hypothesis (“we believe consumers want X because of Y, and no competitor has delivered Z”). Without a hypothesis, the study becomes a fishing expedition. You will learn something — but you will not know whether what you learned is meaningful because you had no prediction to test against.

Defining success criteria after seeing results. If you wait until the data comes back to decide what “good” looks like, you will unconsciously anchor your thresholds to whatever the data shows. A concept that scored poorly becomes “in line with category norms.” A concept that scored well becomes “exceeding expectations.” Pre-defined criteria are the only defense against confirmation bias.

Using the same consumer profile for every study type. A concept screen for a new-to-world product requires different participants than a reformulation test on an existing product. The concept screen needs category users and non-users who might enter the category. The reformulation test needs heavy users of the specific product being changed. Reusing the same screener across study types produces data from the wrong people.

Ignoring occasion context. Testing a breakfast product with people who do not eat breakfast. Evaluating a cleaning product for deep-cleaning occasions with participants who only do quick daily wipe-downs. Assessing a premium beverage concept with people who only buy beverages at convenience stores. Occasion mismatch is invisible in the data — the findings look valid but are disconnected from the actual use context that determines adoption.

Not connecting studies to the intelligence hub. Every study conducted in isolation is a depreciating asset. Its value peaks on delivery day and declines rapidly as market conditions shift and team memory fades. Studies connected to an intelligence hub appreciate over time — each one enriches the context for the next, surfaces cross-study patterns, and builds the institutional knowledge that separates CPG companies that understand their consumers from those that merely study them periodically.

How Do You Launch Your First Templated Study?


The template is designed to be completed in 5-10 minutes by someone who knows their category. Here is the end-to-end process:

Fill in the template. Work through all six sections. Section 1 (hypothesis) and Section 3 (success criteria) require the most thought. Sections 2, 4, 5, and 6 draw from category knowledge you already have. If you cannot complete a section, that is diagnostic — it means you do not yet have the clarity needed to run a useful study, and filling in the template just saved you from launching a study that would have produced ambiguous results.

Launch on User Intuition. Set up your study in 5 minutes. Define your target participant profile using the consumer profile from Section 2. Select or customize a discussion guide using the scaffolding from Section 4. Set your sample size based on the analysis plan from Section 5. The platform handles recruitment from a 4M+ verified panel, AI moderation using structured laddering methodology, and analysis with verbatim evidence. Sign up and launch your first study.

Results in 48-72 hours. Interviews are conducted asynchronously — participants complete them at their convenience, which produces more thoughtful responses than scheduled calls where participants are watching the clock. The AI moderator probes 5-7 levels deep on each response, producing transcript depth that matches or exceeds human-moderated interviews.

Review against pre-defined success criteria. This is where Section 3 pays off. Pull up your success thresholds. Compare the findings against each criterion. The decision is not “do we like these results” — it is “did the concept meet the thresholds we defined before we saw the data.” This transforms the review meeting from a subjective debate into an objective calibration.

Store in the intelligence hub for compounding value. Tag the study, document the cross-study connections, and archive the findings where they will be retrievable for future research. The first study is valuable. The tenth study on the same consumer segment — with nine prior studies enriching the context — is transformative.

For teams wanting help designing their first templated study or calibrating success criteria for their specific category, book a demo and walk through the framework with our team. For detailed pricing across different study sizes and innovation types, see product innovation research cost.

Frequently Asked Questions

What is a product innovation research template?

A product innovation research template is a structured framework that defines the research objective, target consumer profile, success criteria, discussion guide, and analysis plan before a study launches. For CPG teams, it standardizes how concept screening, line extensions, packaging tests, and reformulation studies are designed — ensuring every study produces comparable, decision-ready output regardless of who runs it.

How do you use the template?

Start with Section 1 (Innovation Hypothesis) to define what you believe the consumer needs. Move to Section 2 (Consumer Profile) to specify who you're testing with — including category usage, purchase frequency, and brand relationships. Section 3 (Success Criteria) forces you to define what 'good enough' looks like before you see results.

Can non-researchers use the template?

Yes. The template is designed for product innovation managers, category managers, R&D leads, and brand managers — not just insights professionals. Each section includes guidance on what to fill in and why it matters. The discussion guide section provides ready-to-use question scaffolding that works directly with AI-moderated interview platforms like User Intuition.

How many interviews does a CPG innovation study need?

For directional concept screening on a single idea, 10-15 interviews surface dominant patterns. For comparing 2-3 concepts, 25-50 interviews provide reliable differentiation. For segmented analysis across consumer profiles (e.g., health-conscious vs. convenience-driven), 100-200 interviews allow meaningful within-segment conclusions. At $20 per interview on User Intuition, even the largest study costs less than a single traditional agency engagement.

How is a CPG innovation brief different from other research briefs?

CPG innovation briefs must account for shelf context, occasion-based usage, sensory expectations, retail channel dynamics, and competitive set proximity. A SaaS research brief can ignore all of these. The template includes CPG-specific sections for category dynamics, competitive shelf set, occasion mapping, and retail channel considerations that general templates miss.

How does the template change for line extension studies?

Line extension studies use the same framework but shift emphasis. Section 1 focuses on brand stretch hypotheses (will existing customers accept this variant?). Section 2 narrows to existing brand users. Section 3 adds cannibalization risk as a success criterion. Section 4 adds questions about brand fit, flavor/format expectations, and competitive alternatives within the parent brand's consideration set.

What is the difference between a discussion guide and an interview guide?

In practice, they're used interchangeably. A discussion guide is the moderator's roadmap — the sequence of topics, questions, and probes that structure the conversation. An interview guide typically refers to the same document but emphasizes the one-on-one format. This template provides a discussion guide framework that works for both AI-moderated and human-moderated interviews.

How do you set success criteria for innovation research?

Define success criteria before the study launches — not after you see the data. This template forces you to specify: (1) minimum threshold for unmet need strength, (2) concept appeal benchmarks relative to the category, (3) purchase intent cutoffs, (4) acceptable concern frequency and severity, and (5) competitive differentiation requirements. Without pre-defined criteria, teams rationalize whatever the data shows.

Does the template work outside CPG?

Yes. The six-section framework — hypothesis, consumer profile, success criteria, discussion guide, analysis plan, and knowledge capture — applies to any product innovation research. The CPG-specific elements (shelf context, occasion mapping, retail channel dynamics, cannibalization risk) are the sections you would swap out.

How can you shorten the innovation research timeline?

The biggest timeline compression comes from running research phases in parallel rather than sequentially. Use AI-moderated interviews to complete 100+ consumer conversations in 48-72 hours instead of 4-6 weeks. Pre-define success criteria (Section 3) before fieldwork so analysis begins immediately when data arrives — no waiting for the team to agree on what 'good' looks like after the fact.