Insights & Guides · 9 min read

Why Synthetic Panels Can't Replace Real Customers (And What Can)

By Kevin, Founder & CEO

Part 4 of the series: The Customer Truth Layer for AI Agents

The agent economy is creating enormous demand for customer signal at machine speed. Every AI agent making customer-facing decisions — writing copy, prioritizing features, personalizing experiences — needs to know what real people think. And it needs to know fast.

For teams under pressure to ship agent-powered products, synthetic panels and digital twins offer an appealing shortcut: instant customer feedback generated by AI, at zero marginal cost, with infinite scalability. No recruitment. No waiting. No budget constraints. Just ask the model to simulate your target audience and generate responses.

Some teams are already doing this. Product agents feed LLM-generated “customer responses” into their decision workflows. Marketing agents test copy against synthetic personas before launch. Research teams run parallel studies — one synthetic, one real — hoping the results converge.

The convergence, when it happens, is misleading: agreement proves only that the question had a predictable answer. And when the results diverge, teams rarely discover it until a customer-facing decision has already been made on synthetic ground.

What Synthetic Panels Actually Do

A synthetic panel works by prompting a language model to respond as if it were a specific type of person: “You are a 35-year-old enterprise buyer evaluating project management software. How would you respond to this pricing page?”
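To make that concrete, here is a minimal sketch of the pattern, using the OpenAI Python client as one plausible implementation. The persona text, model name, and pricing copy are illustrative assumptions, not a study design.

```python
# Minimal sketch of a synthetic-panel call. The persona text, model name,
# and pricing copy are illustrative assumptions, not a real study design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are a 35-year-old enterprise buyer evaluating project "
    "management software. Respond in the first person."
)

PRICING_PAGE = "Pro plan: $49/user/month, billed annually."  # placeholder copy

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": PERSONA},
        {
            "role": "user",
            "content": f"How would you respond to this pricing page?\n\n{PRICING_PAGE}",
        },
    ],
)

# The output reads like a customer reaction, but it is a sample from the
# model's distribution over plausible responses, not an observation.
print(response.choices[0].message.content)
```

Scaling this into a "panel" just means repeating the call N times. Nothing in that loop ever samples a person; it samples the same distribution N times.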

The model generates a response that sounds plausible. It uses the right vocabulary. It references realistic concerns. It even simulates hesitation and uncertainty. The output reads like a real person’s reaction.

But the model is not simulating a real person. It is generating the most likely response based on patterns in its training data — patterns derived from millions of texts written by or about people in vaguely similar demographics. The response is a statistical remix, not an observation. It predicts what a person in that category might say, not what any specific person actually thinks.

This distinction matters because the value of customer research lies precisely in the gap between “might say” and “actually thinks.” The unexpected reactions, the surprising objections, the minority perspectives that diverge from the average — these are the signals that change decisions. Synthetic panels, by construction, cannot generate genuine surprise. They can only recombine existing patterns.

Where Synthetic Fails

The failure modes of synthetic panels are systematic, not random. They do not produce occasionally wrong answers — they produce consistently biased answers in predictable directions.

Missing Genuine Emotion

Real people have emotional reactions that emerge from lived experience: the frustration of a product that wasted their time, the skepticism born from being burned by a similar promise, the excitement of seeing a problem they have struggled with finally addressed. These reactions carry signal about how messaging will land in the real world.

Synthetic respondents generate emotional language without emotional experience. A synthetic persona can write “this pricing makes me nervous” because it has seen that pattern in training data. But the nervousness is not grounded in having actually evaluated prices, managed a budget, or felt the pressure of justifying a purchase to a boss. The words are correct. The signal is hollow.

This matters most for messaging decisions. An agent testing taglines against synthetic personas will get responses that identify the right emotional categories (trust, clarity, excitement) without the intensity calibration that comes from real stakes. The synthetic panel cannot tell you that your pricing page creates genuine anxiety — it can only tell you that pricing pages in general sometimes create anxiety.

Cultural and Contextual Blindness

Language models are trained predominantly on English-language text with significant representation from North American and European contexts. Synthetic panels built on these models inherit and amplify this bias.

When synthetic personas are asked to react to messaging in markets with different cultural norms — different relationships to authority, different communication preferences, different decision-making patterns — the responses reflect training data distributions rather than actual cultural reality. A synthetic “Japanese enterprise buyer” is a statistical composite of what English-language texts say about Japanese enterprise buyers, filtered through Western analytical frameworks.

Real conversations with real people surface contradictions, nuances, and culturally coherent perspectives that statistical composites flatten. The difference between synthetic cultural representation and genuine cultural insight can mean the difference between messaging that resonates and messaging that alienates.

No Minority Voice

This may be the most consequential failure. Synthetic panels, by construction, over-represent the average and under-represent the edges. When a model generates responses for “your target audience,” it produces responses that reflect the central tendency of that audience — the majority view, the common reaction, the expected preference.

But customer decisions are often shaped by minority perspectives. The 15% who hate your preferred headline might be your most vocal segment on social media. The 8% who find your claim unbelievable might be the exact buyer persona you are trying to reach. The 22% who interpreted your message differently than you intended might represent an emerging market shift that the majority has not caught up with.

Real conversations surface these perspectives naturally. Human Signal explicitly reports minority objections with verbatim evidence, treating them as signal rather than noise. Synthetic panels cannot surface perspectives that diverge from the training data’s central tendency — by definition, they generate the most likely response, not the most important one.

False Precision

Perhaps the most dangerous feature of synthetic panels is that they produce precise-looking numbers. “67% of synthetic respondents preferred Option A.” The number is specific, the format is familiar, and the result looks identical to a real preference split.

But the number is not a measurement. It is a prediction. When the model generates responses for 100 synthetic personas and 67 prefer Option A, that number reflects the model’s probability distribution over training data, not an observed preference among real people. Run the same synthetic panel again with slightly different random seeds and you might get 71% or 62% — the variance is in the model, not in any real population.

Teams that treat synthetic percentages like survey data are making decisions on fabricated precision. The confidence interval around a synthetic preference split is unknowable because there is no real population being sampled.
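A worked example makes the difference concrete. For a real sample, the standard error of a proportion quantifies how far the observed split can plausibly sit from the true population split; for a synthetic panel the same arithmetic runs, but there is no population for it to describe. A minimal sketch:

```python
# Why "67% of synthetic respondents" is not a measurement.
# For a REAL sample, the normal-approximation standard error tells you how
# much the observed split could differ from the population's true split.
import math

p, n = 0.67, 100                 # observed preference share, sample size
se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
print(f"95% CI: {p - 1.96 * se:.2f} to {p + 1.96 * se:.2f}")  # ~0.58 to 0.76

# For a SYNTHETIC panel the same formula still produces a number, but it
# describes sampling noise around the model's internal probability, not
# around any real population's preference. Rerunning with a different seed
# moves the point estimate (62%, 71%) without saying anything about people.
```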

The Ground Truth Advantage

Every finding from a real conversation traces back to a real person with real stakes, real experience, and real emotional reactions. This traceability is not just methodological rigor — it is the foundation of trustworthy decision-making.

When your agent receives a Human Signal result showing that 72% of real participants preferred Option A, that number represents actual observed behavior: 72% of recruited, vetted, real people who engaged in AI-moderated conversations expressed a genuine preference. The themes driving that preference came from real language used by real people describing real reactions. The minority objections represent real concerns held by real individuals.

This matters for agents making consequential decisions. An agent that writes copy based on synthetic preferences optimizes for an audience that does not exist. An agent that writes copy based on real human signal optimizes for people who actually buy the product.

The evidence trail also enables a different kind of organizational confidence. When a stakeholder asks “why did the agent choose this messaging?”, the answer can include specific quotes from specific participants — not “the model predicted this would resonate” but “real customers said this, in these words, for these reasons.”

Speed Without Sacrificing Truth

The false binary at the heart of the synthetic panel argument is “real but slow” versus “synthetic but fast.” If real customer research takes 4-8 weeks and synthetic responses take seconds, the tradeoff seems obvious for agent workflows that need signal in hours.

But that binary is outdated. AI-moderated conversations with real people now deliver structured results in 2-3 hours. The bottleneck that made traditional research slow — recruiting participants, scheduling conversations, conducting interviews, analyzing transcripts — has been compressed by the same AI technology that powers the agents consuming the results.

Here is what happens: your agent describes what it needs to learn. The system recruits participants from a vetted global panel of 4M+ people (or from your own first-party customers). AI-moderated conversations explore the question with laddering depth, following each participant’s responses 5-7 levels deep. The results are analyzed and delivered as structured Human Signal — headline metrics, driving themes, minority objections, and verbatim evidence.
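To make "structured Human Signal" concrete, here is an illustrative sketch of what such a result could look like as data. Every field name and value below is a made-up example for this article, not the platform's actual schema.

```python
# Illustrative shape of a structured Human Signal result. All field names
# and values are hypothetical examples, not the platform's actual schema.
human_signal = {
    "question": "Which headline best conveys trust for the Pro pricing page?",
    "participants": 60,
    "headline_metric": {"option_a": 0.72, "option_b": 0.28},
    "driving_themes": [
        {"theme": "clarity of total cost", "share": 0.44},
        {"theme": "no-surprise billing", "share": 0.31},
    ],
    "minority_objections": [
        {
            "share": 0.08,
            "objection": "annual-only billing feels like a lock-in",
            "verbatim": "I've been burned by annual contracts before.",
        },
    ],
}

# Because the result is data rather than a slide deck, an agent can branch
# on it directly rather than waiting for a human to read a report:
if any(o["share"] >= 0.05 for o in human_signal["minority_objections"]):
    print("Escalate: a non-trivial minority objection needs human review.")
```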

The entire cycle — from question to actionable result — completes in under 3 hours. Not 4-8 weeks. Not days. Hours.

This is fast enough for agent workflows. An agent writing marketing copy can test three headline options with real people during the same work session. An agent prioritizing features can validate assumptions before the sprint planning meeting. An agent handling a sensitive customer communication can check whether the proposed tone lands as intended.

The speed/truth tradeoff no longer exists. Real human signal is available at agent-compatible speed.

When Synthetic Has a Role

Intellectual honesty requires acknowledging where synthetic approaches add legitimate value.

Hypothesis generation. Before investing in a real study, synthetic responses can help teams brainstorm possible reactions, identify potential objections, and sharpen the questions worth asking real people. Using a model to simulate “what might customers think about this?” is a reasonable way to prepare for research — as long as the simulation is treated as preparation, not as evidence.

Instrument testing. Synthetic panels can stress-test survey designs, identify ambiguous questions, and flag potential issues with study structure before real participants see them. This saves time and improves the quality of real research.

Scenario exploration. When the cost of being wrong is low and the question is directional, synthetic responses can provide useful starting points. Internal brainstorming, early-stage exploration, and low-stakes content ideation are reasonable applications.

The line is clear: any decision that touches real customers should be grounded in real human signal. Synthetic approaches are useful for thinking. Real conversations are necessary for deciding.

For a comprehensive decision framework on when to use AI moderation, synthetic panels, and human participants across different research contexts, see our detailed guide: Synthetic vs Human: When to Trust AI in Research.

The Agentic Research Alternative: Real People at Agent Speed

The false binary between “real but slow” and “synthetic but fast” has been resolved. Agentic market research delivers real human feedback at a speed that fits agent workflows, making synthetic panels unnecessary for any decision where the quality of the underlying signal matters.

In an agentic market research workflow, the AI agent launches a study via MCP, real participants respond through AI-moderated conversations that probe 5-7 levels deep, and the agent receives structured Human Signal within 2-3 hours. The output includes real preference splits from real people, genuine objections grounded in lived experience, and verbatim evidence that traces every finding back to what actual humans said.
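As an illustration, a launch over MCP might look roughly like the sketch below, using the official `mcp` Python SDK. The server command and the `launch_study` tool name are hypothetical placeholders, not a documented integration.

```python
# Sketch of an agent launching a study over MCP with the official `mcp`
# Python SDK. The server command and "launch_study" tool name are
# hypothetical placeholders, not a documented integration.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="research-mcp-server", args=[])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The agent states what it needs to learn; real participants
            # answer through AI-moderated conversations on the other side.
            result = await session.call_tool(
                "launch_study",
                arguments={
                    "question": "Which of these three headlines builds trust?",
                    "participants": 60,
                    "depth": "laddering",
                },
            )
            print(result.content)  # structured Human Signal, hours later

asyncio.run(main())
```

In practice a multi-hour study would more likely use a launch-then-poll pattern than a single blocking call; the point is that both the request and the result are structured data an agent can handle without a human in the loop.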

This is the critical distinction: in agentic market research, the AI is the moderator, not the respondent. The agent orchestrates the research process, but the signal comes from real people with real stakes, real emotions, and real perspectives that diverge from training data averages in the ways that matter most for decisions.

The speed advantage of synthetic panels over traditional research was real. But agentic consumer insights research eliminates that advantage by compressing the real-people research timeline from weeks to hours. Studies start from $200. Scale from 20 to 1,000+ participants. Return structured results that agents can act on programmatically. And every study feeds a Customer Intelligence Hub where findings compound, so future queries are answered even faster.

For teams evaluating whether to use synthetic or real-people approaches, the decision framework is straightforward: use synthetic for hypothesis generation and survey pre-testing, use agentic market research for any decision that affects real customers. For a detailed platform comparison, see the 2026 agentic research tools guide.

Real human signal at agent speed — see how it works →


Series: The Customer Truth Layer for AI Agents

  1. Your AI Agent Is Confidently Wrong About Your Customers
  2. The Agent Stack Is Missing a Layer: Customer Truth
  3. Human Signal: The Data Type Your AI Agent Doesn’t Have
  4. Why Synthetic Panels Can’t Replace Real Customers (And What Can) (you are here)
  5. Compound Intelligence: Why Your Agent Gets Smarter With Every Conversation
  6. Building the Customer Truth Layer: A Technical Guide

Frequently Asked Questions

What are synthetic panels?

Synthetic panels use AI to simulate research participants based on demographic and behavioral data. Instead of asking real people, teams ask AI-generated personas to respond to questions. The outputs predict what people might say rather than capturing what they actually think.

When are synthetic panels useful?

Synthetic panels can be useful for early hypothesis generation, stress-testing survey instruments, exploring directional scenarios before investing in real research, and brainstorming possible objections. They should not replace real people for validating decisions that affect customers.

What is the alternative for agent workflows that need fast signal?

AI-moderated conversations with real people deliver qualitative depth at agent-compatible speed. Studies return structured results in 2-3 hours, with real preference splits, genuine objections, and verbatim evidence from real participants. Fast enough for agent workflows. Grounded enough to trust.

Is agentic market research the same as synthetic research?

No. Agentic market research uses AI to orchestrate real conversations with real people, returning structured Human Signal from genuine reactions. Synthetic panels use AI to simulate consumer responses from training data. The agent is the moderator in agentic research, not the respondent. The difference is between evidence and prediction.

Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve

3 interviews free. No credit card required.

Enterprise

See a real study built live in 30 minutes.

No contract · No retainers · Results in 72 hours