Reference Deep-Dive · 6 min read

How to Conduct Consumer Research for a New Product Launch in CPG

By Kevin, Founder & CEO

Consumer research for a new CPG product launch follows a specific sequence: validate the need, refine the positioning, and stress-test launch readiness. Skipping phases or running them out of order is a more reliable predictor of launch failure than any single research finding.

The failure rate tells the story. Depending on the source, 70-85% of new CPG products fail within 18 months of launch. Nielsen’s analysis attributes the majority of failures not to product quality but to misreading consumer needs, poor positioning, or insufficient differentiation. These are research problems, not manufacturing problems.

The Launch Failure Rate Problem


The CPG industry spends billions on new product development annually. Yet the success rate has barely improved in two decades. The issue is not a lack of research — most large CPG companies conduct some form of consumer testing. The issue is what they test, when they test it, and how they interpret results.

Three failure patterns recur with striking consistency. First, teams validate the product but not the need. A concept scores well in isolation, but the consumer already has a perfectly adequate solution in their pantry. Second, teams test with the wrong consumers. General population samples dilute signal from the heavy category buyers who will actually drive trial. Third, teams front-load research into the ideation phase and then stop, missing the critical refinements that happen when you test positioning, packaging, and purchase intent as separate variables.

The common denominator is treating consumer research as a gate to pass rather than an iterative input that shapes the product throughout development.

Pre-Launch Research Phases


Effective pre-launch research has three distinct phases, each with different objectives and methods.

Phase 1: Need Validation (weeks 12-10 before launch). The goal is not to test your concept — it is to understand the category landscape from the consumer’s perspective. What products do they currently buy? What frustrates them? What workarounds have they invented? What language do they use to describe the category? This phase should surface unmet needs and reveal whether your concept addresses a real gap or an imagined one. Run 60-80 conversations with verified category purchasers. The output is a validated problem statement and consumer language map that feeds directly into positioning. For CPG brands, this phase often reveals that the unmet need exists but the team has framed it incorrectly.

Phase 2: Positioning and Concept Refinement (weeks 8-6 before launch). Now you test the concept, but in context. Show consumers the concept alongside their current category options. Test multiple positioning angles — not just “do you like this?” but “would you switch from your current product to this, and why?” This phase requires 80-120 conversations across key segments (heavy vs. light buyers, brand-loyal vs. brand-switchers, different demographic cohorts). The output is a rank-ordered set of positioning claims with consumer verbatim supporting each. Platforms like User Intuition enable this scale within days, running hundreds of AI-moderated interviews that probe deeply into purchase motivation and switching triggers.

Phase 3: Launch Readiness (weeks 4-2 before launch). This is the stress test. Show the final packaging, the shelf set, the pricing, and the purchase context. Ask consumers to walk through their actual shopping process with this product in the mix. Do they notice it? Do they pick it up? Do they understand what it is? Would they buy it at this price? This phase catches the issues that earlier, more abstract testing misses — the packaging that is beautiful but unclear, the price point that is reasonable in isolation but feels wrong next to the category leader, the brand name that consumers cannot remember or pronounce.

Consumer Language as Positioning Input


One of the most underutilized outputs of pre-launch research is consumer language. The words consumers use to describe their needs, frustrations, and desires are almost always different from the words that brand teams use internally.

This gap matters because positioning that uses internal language falls flat with consumers. A team might describe their product as “a premium functional beverage with adaptogenic ingredients.” Consumers say “something that actually helps me focus without the jitters.” The second framing wins on shelf because it mirrors how people think about the category.

AI-moderated interviews are particularly effective at capturing language because they generate verbatim transcripts at scale. When you run 200 conversations, you can identify the phrases that recur across segments — the natural vocabulary of the category. These phrases become positioning inputs that resonate because they originated with consumers, not copywriters. The complete guide to consumer insights for CPG covers language mining methodology in more detail.
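The phrase-mining idea described above can be sketched in a few lines: count how many distinct transcripts each n-word phrase appears in, and keep only phrases that recur across conversations. This is a minimal illustration, not User Intuition's methodology; the function name and thresholds are hypothetical.

```python
from collections import Counter
import re


def recurring_phrases(transcripts, n=3, min_transcripts=3):
    """Find n-word phrases that recur across multiple transcripts.

    A phrase qualifies only if it occurs in at least `min_transcripts`
    distinct conversations, which filters out one-off idiosyncrasies
    and surfaces the category's natural vocabulary.
    """
    seen_in = Counter()  # phrase -> number of distinct transcripts containing it
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())
        # A set ensures each transcript counts at most once per phrase.
        ngrams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
        seen_in.update(ngrams)
    return [(p, c) for p, c in seen_in.most_common() if c >= min_transcripts]
```

With 200 transcripts, raising `min_transcripts` (say, to 10) isolates phrases that are genuinely shared vocabulary rather than individual quirks.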

Testing with Verified Category Purchasers


The single highest-impact improvement most CPG teams can make to their pre-launch research is tightening their participant criteria. Testing a new yogurt concept with “adults 25-54 who grocery shop” will produce meaningless data. Testing it with “people who bought yogurt at least twice in the past month and can name the brand they bought” produces insights you can build a launch plan around.

Verification matters because category experience shapes evaluation. A heavy category buyer evaluates a new product against their established mental model — they know what good looks like, they have reference prices, they have brand preferences and switching costs. A light buyer evaluates in a vacuum, which inflates purchase intent scores and masks real competitive dynamics.

User Intuition’s panel of 4M+ vetted participants enables screening for specific purchase behaviors, not just demographics. This means you can build samples of verified category purchasers and run deep product innovation research conversations that reflect how real buyers will respond.

The 90-Day Pre-Launch Research Calendar


Here is a practical timeline for a CPG product launch with consumer research embedded at each stage.

Weeks 12-10: Need Validation Sprint. Launch 60-80 AI-moderated interviews with category purchasers. Focus on current behavior, pain points, and unmet needs. Synthesize into a consumer needs framework and language map. Total elapsed time: 5-7 days for fieldwork, 3-4 days for synthesis.

Weeks 9-8: Concept Development. Internal work. Use consumer language and needs framework to refine 2-3 concept directions. Develop positioning statements and rough visual concepts.

Weeks 7-6: Positioning Test Sprint. Launch 100-120 interviews testing concept variants. Include competitive context. Test across 2-3 consumer segments. Synthesize into a recommended positioning with supporting evidence. Total elapsed time: 5-7 days for fieldwork, 3-4 days for synthesis.

Weeks 5-4: Final Development. Refine packaging, finalize pricing, prepare shelf-ready materials based on positioning research outputs.

Weeks 3-2: Launch Readiness Sprint. Launch 60-80 interviews with shelf simulation. Test final packaging, pricing, and purchase context. Identify any remaining friction points. Total elapsed time: 5-7 days for fieldwork, 2-3 days for synthesis.

Week 1: Go/No-Go Decision. Review cumulative evidence from all three phases. The research should provide clear, evidence-traced answers to three questions: Is the need real? Is the positioning compelling? Will consumers find, understand, and buy this product in its actual purchase context?
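The calendar above can be treated as data and sanity-checked: each sprint's worst-case fieldwork-plus-synthesis time must fit inside its week window. The sprint numbers come from the calendar; the structure and names are illustrative.

```python
from dataclasses import dataclass


@dataclass
class Sprint:
    name: str
    weeks_before_launch: tuple[int, int]  # (start, end), counting down to launch
    interviews: tuple[int, int]           # (min, max) interview count
    fieldwork_days: tuple[int, int]
    synthesis_days: tuple[int, int]


CALENDAR = [
    Sprint("Need Validation", (12, 10), (60, 80), (5, 7), (3, 4)),
    Sprint("Positioning Test", (7, 6), (100, 120), (5, 7), (3, 4)),
    Sprint("Launch Readiness", (3, 2), (60, 80), (5, 7), (2, 3)),
]


def max_elapsed_days(s: Sprint) -> int:
    """Worst-case elapsed time for a sprint, fieldwork plus synthesis."""
    return s.fieldwork_days[1] + s.synthesis_days[1]


# Each sprint must fit in its countdown window (inclusive weeks * 7 days).
for s in CALENDAR:
    window_days = (s.weeks_before_launch[0] - s.weeks_before_launch[1] + 1) * 7
    assert max_elapsed_days(s) <= window_days, s.name
```

Summing the ranges gives 220-280 total interviews across the 90 days, which is why this cadence depends on fieldwork that completes in days rather than weeks.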

This calendar is only feasible with AI-moderated research that runs hundreds of conversations in days. Traditional qualitative research would require 4-6 weeks per phase, stretching the total timeline to 6+ months and forcing most CPG companies to skip phases or compress them into superficial exercises. The speed advantage is not about cutting corners — it is about fitting rigorous, iterative research into the timelines that CPG development actually operates on.

The difference between a successful launch and a failed one is rarely the product itself. It is whether the team understood the consumer deeply enough to position, package, and price the product in a way that makes sense within the consumer’s existing category frame. That understanding comes from research — but only if the research is structured, iterative, and grounded in conversations with the right consumers.

Frequently Asked Questions

Why do most CPG product launches fail?

The most common cause of CPG launch failure isn't poor product quality—it's misaligned positioning, insufficient consumer demand for the specific benefit the product delivers, or an underestimated barrier to trial. These are all research-detectable problems: positioning misalignment surfaces in concept testing, demand validation comes from verified category buyers rather than employees, and trial barriers appear in purchase occasion interviews. The failure rate reflects how rarely pre-launch research is conducted with sufficient rigor and sample size to catch these problems.

Why does consumer language research matter for positioning?

Consumer language research captures the specific words and phrases target shoppers use to describe the problem the product solves, the benefit they expect, and the category they're shopping in. These verbatims become the raw material for packaging claims, ad headlines, and retailer pitch deck language—because they're already pre-tested for resonance with the target consumer. Internal team language, however precise and internally logical, has never been validated against real consumer response.

What are verified category purchasers?

Verified category purchasers are consumers with confirmed recent purchase history in the relevant category—distinguished from general population panels where respondents may have no actual experience with the category being studied. Using verified purchasers means concept responses come from people who understand the category context, have price reference points, and know the competitive alternatives—producing research that predicts market behavior rather than responses from hypothetical consumers evaluating unfamiliar territory.

How does fast study turnaround change the pre-launch calendar?

User Intuition's 48-72 hour study turnaround means the 90-day pre-launch research calendar can include multiple sequential research phases rather than one large study. Teams can field concept validation, refine based on findings, test messaging iterations, and validate final claims in distinct waves within the 90-day window—using User Intuition's verified panel of category purchasers to ensure findings reflect real market conditions.
Put This Research Into Action

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call.
