Reference Deep-Dive · 8 min read

Consumer Insights for CPG FEI: Co-Creation, Screening, and MVP

By Kevin

The average CPG brand launches 3-5 new products annually. Industry data shows 85% fail within their first year. That failure rate represents billions in wasted capital and thousands of careers derailed by products that never found their market.

The root cause isn’t a mystery. Most FEI (Front End Innovation) processes rely on intuition layered with selective consumer feedback. Teams conduct focus groups that validate existing assumptions. They run concept tests that measure interest but miss the “why.” They launch MVPs that answer whether people will buy, but not what would make them buy more, buy again, or tell others.

Leading CPG innovation teams are rebuilding their FEI process around three distinct phases of consumer insight: co-creation to surface unmet needs, screening to validate concepts against real purchase criteria, and MVP testing to optimize before scale. Each phase requires different methodologies, different sample sizes, and fundamentally different questions.

The Economics of Getting FEI Right

A typical CPG product launch costs $2-5 million when accounting for R&D, tooling, initial production runs, and go-to-market expenses. The opportunity cost runs higher - every failed launch delays the next potential winner by 12-18 months.

Research from the Product Development & Management Association reveals that companies with systematic FEI processes achieve 2.5x higher innovation success rates. The difference isn’t just process discipline. It’s knowing which consumer questions to ask at which stage, and having the infrastructure to get answers fast enough to matter.

Traditional research timelines compound the problem. Six weeks to recruit and moderate focus groups. Four weeks for concept testing. Eight weeks for usage and attitude studies. By the time insights arrive, market windows narrow and competitive threats emerge. Speed matters, but only when paired with depth.

Phase One: Co-Creation and Needs Discovery

The strongest innovations emerge from understanding jobs-to-be-done that existing products serve poorly or ignore entirely. This requires moving beyond stated preferences into behavioral observation and contextual inquiry.

Traditional ethnographic research delivers rich insights but scales poorly. A typical in-home study reaches 12-20 households over 4-6 weeks. The sample size limits confidence in findings. The timeline creates pressure to move forward before patterns fully emerge.

Modern approaches combine scale with depth through AI-moderated conversations that probe systematically while adapting to individual responses. Teams can reach 100-200 consumers in the discovery phase, exploring category usage patterns, unmet needs, and moments of friction. The methodology matters less than the question design.

Effective co-creation interviews follow a progression. Start with category context: how people currently solve the problem, what triggers purchase occasions, where current solutions disappoint. Move into needs articulation: what outcomes matter most, what tradeoffs they accept, what would justify switching. End with concept reaction: not to validate specific ideas, but to understand which benefit territories resonate and why.
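
To make the progression concrete, it can be written down as a structured discussion guide. This is a minimal sketch: the stage names follow the paragraph above, but the probe wording is invented for illustration, not a prescribed template.

```python
# Hypothetical discussion guide encoding the three-stage co-creation progression.
# Stage names follow the article; probe wording is illustrative only.
DISCOVERY_GUIDE = {
    "category_context": [
        "How do you currently solve this problem?",
        "What triggers a purchase occasion for you?",
        "Where do current solutions disappoint?",
    ],
    "needs_articulation": [
        "Which outcomes matter most to you?",
        "What tradeoffs do you accept today?",
        "What would justify switching products?",
    ],
    "concept_reaction": [
        "Which of these benefit territories resonates, and why?",
        "What would make this kind of claim credible to you?",
    ],
}

# Print the guide in interview order.
for stage, probes in DISCOVERY_GUIDE.items():
    print(stage)
    for probe in probes:
        print(f"  - {probe}")
```

Keeping the guide as data rather than a document makes it easy to version, compare across studies, and feed into whatever moderation platform a team uses.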

The output isn’t a single “winning concept.” It’s a map of the opportunity space with clear sight lines to unmet needs, acceptable price points, and credible benefit claims. Teams should emerge with 3-5 concept territories worth testing, each grounded in specific consumer language about problems and desired outcomes.

Sample size matters here. Fifteen interviews might surface interesting hypotheses. Fifty interviews start revealing patterns. One hundred interviews provide confidence that the patterns reflect broader market reality rather than sample quirks. The math changes based on category complexity and target audience diversity, but the principle holds: co-creation requires enough conversations to distinguish signal from noise.
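
A back-of-envelope calculation shows why those thresholds behave the way they do. The sketch below uses the standard normal-approximation margin of error for an observed proportion; the 30% pattern incidence is an assumed example value, not a figure from the article.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% normal-approximation margin of error for an observed proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 30% of interviews surface a given friction point (assumed example).
p = 0.30
for n in (15, 50, 100):
    print(f"n={n:>3}: 30% ± {margin_of_error(p, n):.0%}")

# n= 15: 30% ± 23%  -> a hypothesis, not a pattern
# n= 50: 30% ± 13%  -> patterns start to separate
# n=100: 30% ±  9%  -> enough precision to rank needs against each other
```

At fifteen interviews, a "pattern" seen in 30% of conversations could plausibly be anywhere from rare to majority behavior; at one hundred, the estimate is tight enough to rank needs against each other.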

Phase Two: Concept Screening Against Purchase Reality

Concept testing traditionally measures purchase intent on 5-point scales, where "definitely would buy" scores above 40% are treated as a green light. This approach optimizes for the wrong outcome - it identifies concepts that test well, not concepts that will succeed in market.

The gap emerges because survey-based concept tests don’t replicate actual purchase decisions. Real shoppers evaluate products in context: shelf sets with 47 other options, price points relative to familiar alternatives, benefit claims competing for attention against established brands. They make decisions under time pressure, often while managing children or thinking about dinner plans.

Effective concept screening recreates decision context. Show the concept alongside competitive products. Probe purchase barriers explicitly: what would stop you from buying this, what questions would you need answered, what price would feel too high. Explore benefit credibility: which claims feel believable, what proof would you need, which ingredients or features matter most.

The methodology shifts from measuring interest to diagnosing obstacles. A concept might score 65% purchase intent but fail because 40% of interested consumers doubt the brand can deliver on the core benefit. Another might score 35% but reveal a passionate niche willing to pay premium prices. The screening phase should identify which concepts have viable paths to market, not which concepts sound appealing in isolation.
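
Using the article's own numbers, a quick diagnostic calculation shows why intent alone misleads. The function below is an illustrative sketch, not a standard industry metric.

```python
def convertible_interest(intent: float, barrier_rate: float) -> float:
    """Share of the sample that is interested AND not blocked by a barrier.
    Illustrative diagnostic only, not a standard industry metric."""
    return intent * (1 - barrier_rate)

# Concept with 65% intent, but 40% of interested consumers doubt the benefit:
print(f"{convertible_interest(0.65, 0.40):.0%}")   # 39% viable interest

# Concept with 35% intent and (assumed) only 10% hitting a barrier:
print(f"{convertible_interest(0.35, 0.10):.1%}")   # 31.5% - smaller but cleaner path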

Sample sizing for screening requires different math than discovery. Discovery seeks to map the full opportunity space - larger samples improve coverage. Screening seeks to predict market performance - sample composition matters more than size. Two hundred consumers who match target demographics provide more signal than 500 general population respondents.
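
One way to see the composition point: if only a fraction of a general-population sample actually matches the target, the effective target-audience sample shrinks accordingly. The 30% incidence figure below is an assumption for illustration.

```python
# Assume target buyers are 30% of the general population (assumed incidence).
incidence = 0.30
gen_pop_n = 500
matched_n = 200

effective_gen_pop = int(gen_pop_n * incidence)
print(f"Gen-pop sample of {gen_pop_n}: ~{effective_gen_pop} target consumers")
print(f"Matched sample of {matched_n}: {matched_n} target consumers")

# Gen-pop sample of 500: ~150 target consumers
# Matched sample of 200: 200 target consumers
```

Under that assumption, the 500-person general sample yields roughly 150 usable respondents, while the 200-person matched sample yields all 200, plus cleaner reads on barriers specific to that audience.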

The output from concept screening should be ruthlessly specific. Not “Concept B tested best” but “Concept B shows 45% unaided interest among parents of children under 5, with primary barriers around price sensitivity and skepticism about the sustainability claim. Optimization path requires reformulating the value proposition around convenience rather than environmental benefit, with price point at $4.99 rather than $5.99.”

Phase Three: MVP Testing for Optimization

The MVP phase answers different questions than screening. Screening validates whether a concept deserves investment. MVP testing optimizes execution before committing to full production runs.

This distinction matters because the research design changes fundamentally. Screening can work with concept boards and descriptions. MVP testing requires physical product, actual packaging, real usage experiences. The questions shift from “would you buy this” to “what would make this better.”

Effective MVP testing follows a usage cycle. Send product to 50-100 target consumers. Let them use it in natural context for 7-14 days depending on category. Conduct structured interviews exploring first impressions, usage occasions, moments of delight or disappointment, and likelihood of repeat purchase.

The interview structure should probe systematically across key dimensions. Packaging: what caught attention, what communicated value, what created confusion. First use: what surprised them, what met expectations, what required explanation. Ongoing use: how it fit into routines, what occasions it served, what alternatives it replaced. Repurchase intent: what would bring them back, what price would feel fair, what would make them recommend it.
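
Those dimensions lend themselves to a simple coding scheme. The sketch below tallies coded interview responses per dimension to surface where friction concentrates; the dimension names follow the paragraph above, but the codes and counts are invented for illustration.

```python
from collections import Counter

# Hypothetical coded responses from MVP interviews: (dimension, code).
coded_responses = [
    ("packaging", "confusing_claim"),
    ("packaging", "strong_shelf_presence"),
    ("first_use", "needed_instructions"),
    ("first_use", "needed_instructions"),
    ("ongoing_use", "replaced_takeout"),
    ("repurchase", "price_feels_high"),
    ("repurchase", "price_feels_high"),
    ("repurchase", "would_recommend"),
]

# Where do responses concentrate, and which specific issues recur?
responses_by_dimension = Counter(dim for dim, _ in coded_responses)
top_codes = Counter(coded_responses).most_common(3)

print(responses_by_dimension)
print(top_codes)
```

The point of the structure is comparability: when every MVP study codes against the same dimensions, a recurring "price_feels_high" signal is visible across concepts rather than buried in one study's verbatims.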

Sample size for MVP testing balances statistical confidence with practical constraints. Fifty users might reveal obvious problems but miss edge cases. Two hundred users provide robust signal across different usage patterns and household types. The right number depends on category complexity and the importance of getting launch right - established brands with distribution can iterate post-launch, while challenger brands need to nail it initially.

The output from MVP testing should drive specific product and positioning changes. Not “consumers liked it” but “67% of users found the portion size too small for family dinners, suggesting either a larger format or repositioning as an individual meal solution. The sustainability messaging tested poorly with 71% saying they didn’t believe the claims, but the convenience benefit resonated strongly with 82% saying it saved meaningful time versus alternatives.”

Connecting the Phases: From Insights to Decisions

The three phases work as a system, not a sequence. Insights from MVP testing might reveal unmet needs that send teams back to co-creation. Concept screening might uncover benefit territories worth exploring more deeply before moving forward.

The key is maintaining clear decision criteria at each gate. Co-creation should produce 3-5 testable concepts with clear hypotheses about target consumers, key benefits, and credible proof points. Screening should narrow to 1-2 concepts with validated purchase drivers and identified optimization paths. MVP testing should deliver go/no-go recommendations with specific execution requirements.
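
One way to keep those gate criteria explicit is to write them down as a checkable structure rather than slideware. The phase names below follow the article; the numeric thresholds are placeholders, not recommendations.

```python
# Hypothetical stage-gate criteria; thresholds are placeholders only.
GATES = {
    "co_creation": {
        "deliverable": "3-5 testable concepts with target, benefit, proof points",
        "min_interviews": 100,
    },
    "screening": {
        "deliverable": "1-2 concepts with validated purchase drivers",
        "min_matched_sample": 200,
    },
    "mvp": {
        "deliverable": "go/no-go with specific execution requirements",
        "min_users": 50,
    },
}

def gate_passed(phase: str, results: dict) -> bool:
    """Check collected study results against the gate's numeric criteria."""
    criteria = GATES[phase]
    return all(results.get(key, 0) >= value
               for key, value in criteria.items()
               if isinstance(value, (int, float)))

print(gate_passed("screening", {"min_matched_sample": 230}))  # True
```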

Speed matters throughout but varies by phase. Co-creation benefits from taking time to reach saturation - the point where additional interviews stop revealing new patterns. This might take 2-3 weeks with modern platforms versus 6-8 weeks with traditional methods. Concept screening can move faster since the questions are more bounded - 1-2 weeks versus 4-6 weeks traditionally. MVP testing depends on category usage cycles but typically compresses to 2-3 weeks versus 8-12 weeks for traditional usage and attitude studies.

The cumulative timeline matters for competitive positioning. A compressed FEI process completing in 6-8 weeks versus 20-24 weeks creates strategic options. Teams can test more concepts in the same calendar time. They can respond to competitive moves without abandoning in-flight innovation. They can iterate based on early market feedback rather than committing to 12-month production runs.

Building the Infrastructure for Systematic Innovation

Most CPG companies struggle with FEI not because they lack smart people or good ideas, but because they lack infrastructure for systematic consumer insight. Research happens episodically, managed by different agencies using different methodologies. Insights live in PowerPoint decks rather than accessible databases. Learning from one launch rarely informs the next.

Leading innovation teams are building permanent infrastructure for consumer insight. This means standardized research protocols that make findings comparable across concepts and categories. It means technology platforms that make launching studies as simple as sending an email. It means insight repositories that let teams search previous findings before commissioning new research.

The infrastructure question extends to sample management. Traditional research recruits fresh samples for every study, losing the ability to track how perceptions evolve. Modern approaches maintain panels of category users who can be re-contacted for longitudinal research. This enables tracking how trial converts to repeat purchase, how usage patterns evolve, and how competitive dynamics shift over time.
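
A minimal sketch of what panel re-contact enables: with purchase records keyed to stable panelist IDs, trial-to-repeat conversion falls out of a simple tally. The field names and data here are invented for illustration.

```python
from collections import defaultdict

# Invented panel purchase records: (panelist_id, purchase_number).
purchases = [
    ("p01", 1), ("p01", 2), ("p02", 1),
    ("p03", 1), ("p03", 2), ("p03", 3), ("p04", 1),
]

# Count purchases per panelist across the tracking window.
counts = defaultdict(int)
for panelist, _ in purchases:
    counts[panelist] += 1

triers = len(counts)
repeaters = sum(1 for c in counts.values() if c >= 2)
print(f"Trial-to-repeat conversion: {repeaters / triers:.0%}")  # 50%
```

With fresh samples recruited per study, that ratio is unknowable; with a maintained panel, it is a query.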

Technology choices matter but shouldn’t drive methodology. The best platforms make good research easier rather than making easy research possible. They should support proper question design, enable natural conversations rather than rigid surveys, and deliver insights in formats that inform decisions rather than requiring interpretation.

The Path Forward for CPG Innovation Teams

The opportunity for CPG brands isn’t just faster research or cheaper insights. It’s fundamentally rethinking how consumer understanding informs innovation decisions. This requires moving beyond episodic research projects toward continuous insight generation.

Start by auditing current FEI processes against three questions. First, do we understand unmet needs deeply enough to generate concepts that solve real problems rather than creating products seeking problems? Second, do we test concepts against actual purchase criteria or just measure interest? Third, do we optimize MVPs based on usage reality or launch based on concept test scores?

The answers reveal where to invest in upgrading insight infrastructure. Teams weak on needs discovery should focus on scaling co-creation research. Teams with strong concepts but poor conversion should emphasize screening methodology. Teams launching products that disappoint should strengthen MVP testing.

The investment required isn’t primarily financial. Modern research platforms cost less than traditional agencies while delivering more depth and speed. The real investment is organizational - building research protocols, training teams on question design, and creating decision frameworks that connect insights to action.

The competitive advantage goes to brands that make this investment now. As AI-powered research tools democratize access to consumer insights, the differentiator won’t be having insights but knowing which insights matter when. That requires systematic processes, clear decision criteria, and infrastructure that makes good research the default rather than the exception.

CPG innovation has always been about understanding consumers deeply enough to create products they’ll love. The tools for achieving that understanding are evolving rapidly. The brands that master the new tools while maintaining methodological rigor will define the next generation of category winners.

Get Started

Put This Research Into Action

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call.

Self-serve: 3 interviews free, no credit card required.
Enterprise: see a real study built live in 30 minutes.

No contract · No retainers · Results in 72 hours