MaxDiff tells you what people prefer. Conjoint tells you how they trade off features against each other. Neither tells you why.
That gap between quantifying preference and understanding motivation is where most product and pricing strategies break down. A conjoint study might reveal that customers value battery life over screen size at a 2:1 utility ratio. But it cannot tell you that battery life anxiety stems from a specific commuting pattern, or that screen size only matters during a narrow use case that your product team has never considered. The numbers are precise. The context behind them is missing.
AI-moderated interviews exist to fill that context gap, not to replace the numbers. The research teams producing the strongest work in 2026 — whether they come from qualitative or quantitative traditions — are using both methods together in a deliberate sequence.
What Do MaxDiff and Conjoint Analysis Actually Measure?
MaxDiff, or maximum difference scaling, is an elegant solution to a specific problem: people are bad at rating things on absolute scales, but good at identifying extremes. Instead of asking respondents to rate ten features on a 1-to-7 scale (the kind of survey question where everything clusters around 5), MaxDiff presents subsets of three to five items and asks “which matters most?” and “which matters least?” Across many such comparisons, the method produces a scaled ranking that reveals the relative distance between items, not just their order.
The output is a ratio-scaled importance score for every attribute tested. Feature A is not just “more important than” Feature B. It is 2.3 times more important. That precision makes MaxDiff extremely useful for prioritization decisions, roadmap planning, and concept testing.
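To make the mechanics concrete, here is a minimal sketch of the simplest MaxDiff scoring approach, count analysis, using made-up choice data. Production studies typically use hierarchical Bayesian estimation instead, and every item name and response below is hypothetical:

```python
from collections import Counter

# Hypothetical MaxDiff responses: each task shows a subset of items;
# the respondent picks one "best" and one "worst".
tasks = [
    {"shown": ["battery", "screen", "camera", "price"], "best": "battery", "worst": "screen"},
    {"shown": ["battery", "camera", "storage", "price"], "best": "price", "worst": "storage"},
    {"shown": ["screen", "camera", "storage", "price"], "best": "camera", "worst": "screen"},
    # ...many more tasks across many respondents
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(item for t in tasks for item in t["shown"])

# Count-based score: (times chosen best - times chosen worst) / times shown.
# Ranges from -1 (always worst) to +1 (always best).
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}

for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:8s} {score:+.2f}")
```

With enough tasks per respondent, these scores converge on the same ordering a full utility estimation would produce, which is why count analysis works as a quick sanity check on fielded data.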
Choice-based conjoint analysis goes further. Instead of evaluating attributes in isolation, conjoint presents realistic product configurations (bundles of features at specific levels) and asks respondents to choose their preferred option. By systematically varying which features appear at which levels across many choice tasks, conjoint decomposes the total utility of each product configuration into part-worth utilities for every feature level.
This decomposition is what makes conjoint uniquely powerful. It answers questions like: how much additional willingness-to-pay does a 4-hour battery improvement generate? If we add Feature X but remove Feature Y, what happens to preference share? Which product configuration maximizes revenue across three price tiers?
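A hedged sketch of how those answers fall out of the additive part-worth model: a configuration’s total utility is the sum of its part-worths, and willingness-to-pay for a feature change can be approximated by dividing the utility gain by the utility cost of a dollar (assuming a linear price effect). All numbers below are invented for illustration, not estimates from any real study:

```python
# Hypothetical part-worth utilities from a choice-based conjoint (illustrative only).
part_worths = {
    ("battery", "8h"): 0.0,   ("battery", "12h"): 0.9,
    ("screen", "6.1in"): 0.0, ("screen", "6.7in"): 0.3,
    ("price", 699): 0.0,      ("price", 799): -0.5,
}

def total_utility(config):
    """Additive model: utility of a configuration = sum of its part-worths."""
    return sum(part_worths[level] for level in config)

base = [("battery", "8h"), ("screen", "6.1in"), ("price", 699)]
upgraded = [("battery", "12h"), ("screen", "6.1in"), ("price", 699)]

utility_gain = total_utility(upgraded) - total_utility(base)  # 0.9 utils
utils_per_dollar = 0.5 / (799 - 699)  # price part-worth spread, assumed linear
wtp = utility_gain / utils_per_dollar
print(f"Approximate WTP for the 12h battery: ${wtp:.0f}")  # ~$180
```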
Conjoint can also run market simulations, modeling how shifts in your product configuration or a competitor’s would redistribute market share. For pricing strategy, product configuration, and competitive positioning, conjoint analysis is one of the most rigorous tools available.
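The standard simulation rule is share of preference: each product’s share is its exponentiated total utility divided by the sum across all products in the simulated market. A sketch, again with invented utilities:

```python
import math

# Hypothetical total utilities for three competing configurations,
# computed by summing part-worths as in the previous sketch.
utilities = {"our_product": 1.2, "competitor_a": 0.8, "competitor_b": 0.5}

def logit_shares(utils):
    """Share of preference: exp(U_i) / sum_j exp(U_j)."""
    exps = {name: math.exp(u) for name, u in utils.items()}
    total = sum(exps.values())
    return {name: e / total for name, e in exps.items()}

for name, share in logit_shares(utilities).items():
    print(f"{name:14s} {share:.1%}")

# Re-running with a modified configuration (e.g., a competitor's price cut
# raising their utility) shows how preference share would redistribute.
```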
Where Do Discrete Choice Methods Hit Their Limits?
The power of MaxDiff and conjoint comes with structural constraints that research teams must account for.
You can only measure what you include. Both methods require pre-defined attribute lists. A conjoint study measures the utility of battery life, screen size, camera quality, and price because those attributes were written into the study design. If the actual purchase driver is something your team never considered, such as the emotional meaning of a brand’s sustainability positioning or the influence of a child’s opinion on the decision, the conjoint will produce precise results that miss the most important variable.
This is the unknown unknowns problem. Discrete choice methods are confirmation tools. They quantify the importance of hypothesized attributes. They do not discover new ones. Other product testing methods share this same constraint whenever they rely on pre-defined stimuli.
The “why” is invisible. A conjoint study might show that Segment A has 40% higher willingness-to-pay for Feature X than Segment B. But the study cannot explain that difference. Is it because Segment A uses the product in a specific environment? Because they had a bad experience with a competitor’s version of that feature? Because the feature maps to an identity they are constructing? Without the qualitative context, the quantitative finding is actionable but shallow.
Expertise and cost barriers are real. A well-designed conjoint study requires specialized survey design (choosing the right attributes, levels, and experimental plan), custom programming, fielding with 300 to 1,000 respondents, and statistical analysis using hierarchical Bayesian estimation or latent class models. The full engagement typically costs $25,000 to $75,000 and takes three to six weeks from design to deliverable. MaxDiff is less complex but still requires careful attribute selection and sample sizes of 200 to 400.
Attribute wording matters enormously. The language used to describe each feature level directly affects how respondents interpret and evaluate it. “Long battery life” versus “12 hours between charges” versus “lasts your entire workday” will produce different utility estimates for the same underlying attribute. Without understanding how customers naturally think about and describe these features, conjoint designers are making consequential framing decisions with limited information.
How Do AI-Moderated Interviews Fill the Qualitative Gap?
AI-moderated interviews operate at two critical points in the research workflow: upstream of conjoint design and downstream of conjoint results.
Upstream: discovering what to measure. Before locking a conjoint design, running 50 to 100 AI-moderated interviews across your target segments surfaces the decision criteria customers actually use, in their own language, including criteria your team would not have hypothesized. These interviews explore the decision-making context: what triggered the purchase consideration, who influenced the decision, what alternatives were evaluated and why, what emotional associations surround each feature.
At $20 per interview with results in 48-72 hours, this upstream discovery phase typically costs $1,000 to $2,000 and delivers a validated attribute list, natural language framings for each attribute level, and preliminary hypotheses about segment differences. That investment dramatically improves the return on a $50,000 conjoint study by ensuring the conjoint measures the right things.
Downstream: explaining what the numbers mean. After conjoint results reveal that a specific feature bundle wins in Segment A but loses in Segment B, targeted AI interviews with participants from each segment explain the context behind the numbers. Why does Segment A value this combination? What experience or need makes Feature X more important to them? Why does the winning bundle fail to resonate with Segment B?
This qualitative explanation layer transforms conjoint outputs from statistical tables into actionable product narratives. Product teams can design with empathy for the motivations behind the preference data, not just the preferences themselves.
User Intuition runs these interviews across a panel of 4M+ participants in 50+ languages with 98% participant satisfaction, making it possible to match the exact demographic and behavioral profiles used in your conjoint sample.
Comparing the Two Approaches
| Dimension | MaxDiff / Conjoint | AI-Moderated Interviews |
|---|---|---|
| Research question | What do people prefer? How do they trade off? | Why do they prefer it? What context drives the tradeoff? |
| Cost per study | $25,000-$75,000 | $20 per interview (typically $1,000-$4,000 per study) |
| Timeline | 3-6 weeks | 48-72 hours |
| Sample size | 200-1,000 respondents | 30-50 for saturation (scalable to hundreds) |
| Quantitative rigor | High (statistical significance, utility estimation) | Low (qualitative themes, not statistical inference) |
| Qualitative depth | None (closed-ended choice tasks) | High (5-7 levels of emotional laddering) |
| Ability to discover unknowns | None (pre-defined attributes only) | High (open-ended exploration surfaces new variables) |
| Statistical significance | Yes (confidence intervals, p-values) | No (thematic saturation, not statistical inference) |
| Emotional context | Not captured | Core strength (motivations, identity, experience) |
| Market simulation | Yes (share-of-preference modeling) | No (directional insights, not predictive models) |
Why Should You Use Both Together?
The highest-value research workflow sequences three phases:
Phase 1: Qualitative discovery. Run 50 to 100 AI-moderated interviews across target segments. Objective: map the decision landscape, discover unanticipated attributes, understand emotional context, and capture the language customers naturally use when describing features and tradeoffs. Duration: 48-72 hours. Cost: approximately $1,000 to $2,000.
Phase 2: Quantitative measurement. Design a MaxDiff or conjoint study using the attribute list and framings validated in Phase 1. Field with 300 to 1,000 respondents. Objective: produce statistically significant importance rankings, part-worth utilities, willingness-to-pay estimates, and market simulations. Duration: three to six weeks. Cost: $25,000 to $75,000.
Phase 3: Qualitative explanation. Run targeted AI-moderated interviews with 30 to 50 participants selected to represent the key segments identified in Phase 2. Objective: explain why specific feature bundles won or lost, what drives willingness-to-pay differences across segments, and what product narratives resonate with each group. Duration: 48-72 hours. Cost: approximately $600 to $1,000.
The total investment adds $1,600 to $3,000 and a few days to a conjoint program, but the quality of the output improves dramatically. Phase 1 ensures the conjoint measures the right attributes. Phase 3 ensures the team understands and can act on the results.
This is not a theoretical recommendation. Product teams that run qualitative bookends around their quantitative studies consistently report better attribute selection, fewer surprises in results interpretation, and faster translation of findings into product decisions.
When Is One Sufficient Without the Other?
Not every research question requires the full three-phase workflow.
Conjoint alone works when: You are operating in a well-understood product category with stable, validated attributes. The team has deep category expertise and recent qualitative data. The research question is narrowly focused on pricing optimization or feature-level tradeoffs within a known set. Simple line extensions in mature categories, A/B pricing decisions, and competitive benchmarking against known feature sets can often proceed with standalone conjoint.
AI-moderated interviews alone work when: You are in early-stage exploration where the attribute space itself is undefined. Jobs-to-be-done research, category entry studies, and new market exploration are better served by open-ended qualitative discovery than by premature quantification. If you are trying to understand a market you have not yet entered, running a conjoint with hypothesized attributes risks producing precise answers to the wrong questions.
AI-moderated interviews are also sufficient for ongoing voice-of-customer programs, churn analysis, win-loss research, and brand perception tracking where the goal is continuous understanding rather than point-in-time quantification.
The decision rule is straightforward: if you already know the relevant variables and need to measure them precisely, conjoint works alone. If you do not know the variables yet, AI interviews come first. If you need both measurement and understanding, sequence all three phases.
From the User Intuition team: AI-moderated interviews surface the attributes your conjoint should test — and explain why your winning feature bundle resonates. Use them upstream and downstream of discrete choice analysis.