Reference Deep-Dive · 7 min read

Concept Testing for Packaging Redesign vs. New Packaging

By Kevin, Founder & CEO

Redesign Is Not the Same Problem as New Design


When a brand creates packaging for a new product, the design team starts from zero. There is no existing equity, no shelf recognition, no customer expectation. The test question is simple: does this packaging communicate what we need it to communicate?

Redesign is a fundamentally different problem. The current packaging — however dated or imperfect — has accumulated assets over time:

  • Shelf recognition. Customers find your product by its visual signature. Change that signature and you risk becoming invisible at shelf.
  • Trust cues. Color, typography, and layout patterns signal category, quality tier, and brand identity. Redesigning these elements can inadvertently shift perceived positioning.
  • Emotional attachment. Loyal customers have a relationship with the packaging they know. Redesign can feel like betrayal, even when the new design is objectively stronger.

The testing methodology must account for these dynamics. Treating a redesign test like a new design test is how brands destroy equity they spent years building.

The Incumbency Advantage


Current packaging has what researchers call the incumbency advantage — it benefits from familiarity, recognition, and established associations. This advantage is real and measurable, but it can mislead if not handled carefully in testing.

In a direct comparison, the current design will almost always score higher on “familiarity” and “recognition” metrics. This is tautological — people recognize what they have seen before. The question is whether those familiarity scores are masking genuine design weaknesses or accurately reflecting brand equity worth preserving.

How to Separate Equity from Inertia

The distinction matters: equity is the accumulated positive association that drives preference and purchase. Inertia is mere familiarity that creates resistance to change without adding value.

Three diagnostic signals help distinguish them:

  1. Attribute-level analysis. If the current packaging scores high on “I would notice this on shelf” but low on “this looks premium” or “this clearly communicates what the product does,” the recognition score reflects inertia, not equity.
  2. Unaided description. Ask participants to describe what the current packaging communicates. If their descriptions align with your intended positioning, there is genuine equity. If they struggle or describe something different from your strategy, the familiarity is not translating into meaningful brand communication.
  3. New customer reactions. People who have never purchased your product have no incumbency bias. Their reactions to the current packaging reveal its standalone communication power, separate from accumulated familiarity.

Why Monadic Design Is Critical for Redesigns


The single most important methodological decision in redesign testing is using a monadic design — showing each participant only one version of the packaging.

The Problem with Comparative Testing for Redesigns

When participants see both the current and new design, comparison effects dominate their evaluation:

  • They focus on what changed rather than evaluating each design holistically
  • The familiar design gets an automatic “comfort” boost
  • Minor differences get amplified while the overall design impression gets lost
  • Stated preference becomes unreliable because participants are answering “which do you like more?” rather than “does this packaging work?”

How Monadic Testing Works

Split your sample into separate cells. One group sees only the current packaging. Another group sees only the redesign. Each group evaluates the design they see on the same set of metrics — appeal, communication clarity, quality perception, purchase intent, brand fit.

The comparison happens in analysis, not in the participant’s mind. This produces cleaner data because each design is evaluated on its own terms.

Design Element          Current Packaging (Cell A)   Redesign (Cell B)
Overall appeal          7.2                          7.8
Communication clarity   6.1                          8.3
Perceived quality       7.5                          6.9
Purchase intent         6.8                          7.1

In this example, the redesign improves communication clarity substantially but loses ground on perceived quality. That tradeoff would be invisible in a comparative test where participants simply pick their preference.
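Because the comparison happens in analysis, the cell-versus-cell readout is a straightforward two-sample check. A minimal sketch in Python using Welch's t statistic on one metric (all ratings here are illustrative, not real study data):

```python
import statistics
from math import sqrt

def welch_t(cell_a, cell_b):
    """Mean lift and Welch's t statistic for two independent monadic cells."""
    ma, mb = statistics.mean(cell_a), statistics.mean(cell_b)
    va, vb = statistics.variance(cell_a), statistics.variance(cell_b)
    se = sqrt(va / len(cell_a) + vb / len(cell_b))
    return (mb - ma), (mb - ma) / se

# Hypothetical 1-10 "communication clarity" ratings
current  = [6, 5, 7, 6, 6, 7, 5, 6, 7, 6]   # Cell A: current packaging
redesign = [8, 9, 8, 7, 9, 8, 8, 9, 7, 8]   # Cell B: redesign

diff, t = welch_t(current, redesign)
print(f"mean lift: {diff:+.1f} points, t = {t:.1f}")
```

Real studies would run this per metric with far larger cells, but the shape of the analysis is the same: each cell is summarized independently, then compared.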

AI-moderated depth interviews add a critical layer here. Beyond the ratings, the moderator probes why participants react as they do — uncovering, through 5-7 levels of laddering, what specific visual elements drive the perceived quality gap and whether the communication gains justify the quality perception risk.

Testing with Loyal Customers vs. New Prospects


Redesign testing must include both segments, but their reactions serve different strategic purposes.

Loyal Customer Testing

Loyal customers are your redesign risk gauge. They have existing expectations and emotional connections to the current packaging. Their reactions reveal:

  • Equity disruption. What do they miss from the old design? Which elements carried meaning you did not realize?
  • Change tolerance. How much visual distance from the current design is acceptable before it feels like a different brand?
  • Feature recognition. Can they still find the product on shelf? Does the redesign preserve the visual cues they use to locate your product?

Loyal customers will almost always express some resistance to redesign. The question is whether the resistance is principled (the redesign genuinely lost something important) or reflexive (they dislike change but will adapt).

New Prospect Testing

New prospects are your redesign opportunity gauge. They have no attachment to the current design and evaluate the redesign purely on its communication and appeal. Their reactions reveal:

  • Standalone strength. Does the redesign work for someone seeing it for the first time?
  • Category cues. Does the redesign clearly signal what category the product belongs to?
  • Competitive differentiation. Does the redesign stand out from competitive packaging in the same space?
  • Quality and price tier signaling. Does the redesign communicate the intended positioning?

The strategic sweet spot is a redesign that excites new prospects without alienating loyal customers. Segment-level analysis makes this tradeoff visible and quantifiable.
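That tradeoff can be read directly off the monadic cells once they are split by segment. A minimal sketch, with hypothetical segment-level means:

```python
# Hypothetical monadic mean appeal scores (1-10) by segment and cell
scores = {
    "loyal":    {"current": 7.9, "redesign": 7.6},
    "prospect": {"current": 6.4, "redesign": 7.8},
}

# Loyal customers show a small loss, prospects a clear gain
for segment, cells in scores.items():
    delta = cells["redesign"] - cells["current"]
    verdict = "gain" if delta > 0 else "risk"
    print(f"{segment:>8}: {delta:+.1f} ({verdict})")
```

A pattern like this quantifies the sweet-spot question: is the prospect upside large enough to justify the loyal-customer downside?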

Shelf Simulation in Concept Testing


Packaging does not exist in isolation. It sits on a shelf (physical or digital) surrounded by competitors. Testing packaging redesigns without shelf context produces misleading results.

Why Shelf Context Matters

A redesign that looks striking in isolation may disappear on shelf. Conversely, a design that seems subtle in isolation may pop in competitive context because it breaks the visual pattern of the category.

How to Incorporate Shelf Simulation

During AI-moderated interviews, present the packaging in context:

  1. Competitive shelf set. Show the design alongside 5-8 competitive products in a realistic shelf arrangement. Ask participants what they notice first, what draws their attention, and what they would pick up.
  2. Findability test. For loyal customers, show the redesigned packaging in a shelf set and measure how quickly they identify your product. A significant drop in findability is a red flag.
  3. Category convention analysis. Ask participants to sort the shelf set by quality tier or price level. Where does your redesign land? If it shifted from “premium” to “value” in participants’ mental models, the redesign has a positioning problem.
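The findability check in step 2 reduces to a simple before/after timing comparison. A minimal sketch, using invented locate times and an assumed 50% threshold for flagging a problem:

```python
import statistics

# Hypothetical seconds-to-locate for loyal customers in a competitive shelf set
current_times  = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2]   # current packaging
redesign_times = [4.8, 5.1, 3.9, 4.4, 5.6, 4.2]   # redesigned packaging

baseline = statistics.median(current_times)
redesign = statistics.median(redesign_times)

# Assumed rule: flag if median locate time grows by more than 50%
if redesign > baseline * 1.5:
    print(f"red flag: findability dropped ({baseline:.1f}s -> {redesign:.1f}s)")
```

The 50% threshold is a placeholder; the right cutoff depends on the category and how much of your volume comes from habitual shelf shoppers.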

The Turning Point Brands Case: When “Premium” Reads as “Cheap”


A particularly instructive example of redesign risk comes from packaging testing that revealed unexpected positioning shifts. A brand invested in a redesign intended to signal premium quality. The design team used clean lines, minimalist typography, and a refined color palette — all design language associated with premium positioning.

In concept testing, participants read these cues differently. The minimalism was interpreted as “generic” and “store brand.” The clean lines that signaled “premium” to the design team signaled “cheap” to consumers who associated premium with rich detail, embossing, and visual complexity in that particular category.

This disconnect would have been invisible in a focus group where participants see the design team’s excitement and adjust their feedback accordingly. In AI-moderated depth interviews, where there is no moderator energy to read, participants gave unvarnished reactions that revealed the positioning mismatch before the brand committed to production.

The lesson: design intent does not equal consumer perception. What reads as premium in one category may read as budget in another. Only consumer testing reveals how the visual language actually lands.

A Practical Redesign Testing Protocol


Phase 1: Equity Audit (Before Redesign Begins)

Test the current packaging with both loyal customers and new prospects to establish a baseline. This audit reveals what the current design does well (equity to preserve) and where it falls short (opportunity for the redesign to improve).

Phase 2: Direction Screen

Test 2-3 redesign directions monadically against the current design (three to four cells in total). Identify which direction best preserves equity while improving on the current design's weaknesses.
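One way to operationalize "preserves equity while improving" is to score each direction's gain on the target metric minus a penalty for any loss on the equity metric. A minimal sketch with hypothetical cell means and an assumed 2x penalty weight on equity loss:

```python
# Hypothetical monadic means (1-10) per cell on two key metrics
cells = {
    "current":     {"quality": 7.5, "clarity": 6.1},
    "direction_a": {"quality": 7.4, "clarity": 8.0},
    "direction_b": {"quality": 6.5, "clarity": 8.6},
    "direction_c": {"quality": 7.2, "clarity": 7.0},
}

baseline = cells["current"]

def screen(name):
    """Reward clarity gains; penalize losses on perceived quality (equity)."""
    c = cells[name]
    gain = c["clarity"] - baseline["clarity"]
    loss = max(0.0, baseline["quality"] - c["quality"])
    return gain - 2 * loss  # assumed weighting: equity loss counts double

directions = [n for n in cells if n != "current"]
winner = max(directions, key=screen)
print(winner)
```

Here the biggest clarity gain (direction_b) does not win because it gives up too much perceived quality; the weighting itself is a strategic judgment, not a statistical one.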

Phase 3: Refinement

Take the winning direction and refine based on Phase 2 feedback. Test the refined version monadically against the current design with fresh participants.

Phase 4: Shelf Validation

Test the final redesign in shelf context with competitive products. Validate findability with loyal customers and standout appeal with new prospects.

At $20 per interview, this four-phase protocol costs a fraction of the production and marketing expense that a poorly tested redesign would waste. The CPG concept testing guide covers additional methodology for packaging decisions, and User Intuition’s concept testing solution supports the monadic test designs and segment-level analysis that redesign testing demands.

Frequently Asked Questions

How does testing a packaging redesign differ from testing new packaging?

New packaging is evaluated by consumers on absolute merit because they have no prior relationship with it. Packaging redesigns are evaluated against an existing equity baseline, where loyal customers have formed associations and expectations that the new design may violate. Research that doesn't account for this incumbency advantage will systematically underestimate the risk of redesigns that test well in isolation but erode loyalty in market.

Why is monadic testing better than paired comparison for redesigns?

Paired comparison tests expose the same respondent to both old and new packaging, which artificially highlights differences and biases responses toward novelty. Monadic designs show separate matched samples only the old or the new design, measuring response to each as it would actually be experienced in market. Redesigns tested monadically consistently show more conservative performance relative to paired comparison tests, which is a more accurate predictor of actual market performance.

Why test loyal customers and new prospects separately?

Loyal customers are the population most exposed to redesign risk because they have the strongest prior associations with existing packaging cues. New prospects evaluate redesigns more favorably because they have no equity to lose. Research that samples only the general population or only new prospects will systematically underestimate redesign risk for the brand's core customer base. Segment-specific analysis of loyal versus prospect response is essential for accurate risk assessment.

How does User Intuition support redesign testing?

User Intuition's platform supports separate monadic samples across loyal and new prospect segments from a 4M+ panel in 48-72 hours, enabling brands to run the methodologically correct redesign test architecture without the 8-week timelines of traditional research. At $20 per interview, brands can test multiple design variants and segmentation cuts within a single sprint at a fraction of traditional agency research costs. The speed enables iterative testing between design rounds rather than a single high-stakes test before launch.
Get Started

Put This Research Into Action

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call.
