
How to Run Consumer Research from Claude Code (Step-by-Step)

By Kevin, Founder & CEO

Claude Code can talk to real people. Not simulated personas, not training data, not synthetic responses — actual consumers from a 4M+ vetted global panel who will tell you what they think about your headlines, claims, and messaging.

One line of config. Results in 2-3 hours. Studies from ~$200.

This guide walks through setup, three real workflow examples, and the cost management features you should know about.

Setup (60 Seconds)

Option 1: Config File

Add to your ~/.claude.json or project .mcp.json:

{
  "mcpServers": {
    "userintuition": {
      "url": "https://mcp.userintuition.ai/mcp"
    }
  }
}

Option 2: CLI

claude mcp add userintuition --transport http https://mcp.userintuition.ai/mcp

That’s it. On first use, you’ll get an OAuth prompt to authenticate with your User Intuition account (free tier: 3 interviews, no credit card).

For more on the consumer research API and what the responses look like, see the API guide with example calls and responses.

Verify It Works

After adding the server, ask Claude Code:

What User Intuition tools do you have available?

It should list five tools: ask_humans, get_results, list_studies, edit_study, and cancel_study.

What You Can Test

Three research modes, each designed for a different question:

preference_check
  When to use: “Which of these 3 headlines should we use?”
  What you get back: Preference distribution, reasons per option, minority dissent with real quotes

claim_reaction
  When to use: “Will people believe ‘Cut onboarding time by 60%’?”
  What you get back: Agreement score, credibility data, skepticism triggers, recommended edits

message_test
  When to use: “What does this landing page copy actually promise?”
  What you get back: Clarity score, implied promise clusters, confusion drivers, gap analysis
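Each mode is just a parameter on the ask_humans tool call, alongside your stimuli and sample size. As a rough sketch of what the agent sends (the field names here are illustrative assumptions, not the documented schema):

```json
{
  "mode": "preference_check",
  "stimuli": [
    "Ship features your customers actually want",
    "Stop guessing what customers think",
    "Customer research in hours, not months"
  ],
  "sample_size": 25
}
```

You never write this payload yourself; Claude Code assembles it from your plain-English prompt.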

Workflow 1: Validate Headlines Before Launch

You’re building a landing page and have three headline options. Instead of guessing or asking your team (who are too close to the product), ask 25 strangers.

Prompt:

I have three headline options for our product landing page. Run a preference
check with 25 people to see which resonates most:

1. "Ship features your customers actually want"
2. "Stop guessing what customers think"
3. "Customer research in hours, not months"

Claude Code creates a preference_check study. Real people from the panel see all three options, pick their favorite, and explain why in an AI-moderated conversation that probes 5-7 levels deep.

2-3 hours later, ask:

Check results for my headline study.

What you get back:

  • Option 3 won with 48% preference. Key driver: specificity of “hours, not months” created believable contrast.
  • Option 1 came second (32%). People liked the outcome focus but found “actually want” slightly condescending.
  • Minority view (20%): Option 2 resonated with people who’d been burned by bad assumptions. Quote: “This one gets it — I’ve wasted months building the wrong thing.”
  • Recommended edit: Keep Option 3’s time contrast but soften “hours” to “days” if you can’t actually deliver in hours.

That’s not a survey checkbox. That’s depth research with real reasoning, delivered while you’re still in your coding session. (For the broader picture of how MCP connects AI agents to research, see MCP for market research.)

Preference checks with 25 participants cost approximately $200-500 and return results in 2-3 hours — including preference distributions, qualitative reasoning per option, minority dissent with real quotes, and actionable recommendations.
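To give a sense of the structure behind those bullets, a completed study retrieved via get_results might come back shaped something like this (field names are illustrative, drawn from the findings above rather than the actual response schema):

```json
{
  "status": "complete",
  "mode": "preference_check",
  "preference_distribution": {
    "option_3": 0.48,
    "option_1": 0.32,
    "option_2": 0.20
  },
  "minority_view": {
    "share": 0.20,
    "quote": "This one gets it — I’ve wasted months building the wrong thing."
  },
  "recommended_edit": "Keep Option 3’s time contrast but soften ‘hours’ to ‘days’."
}
```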

Workflow 2: Test a Product Claim

Your marketing team wants to put “Trusted by 10,000+ teams” on the homepage. Will people actually believe it?

Prompt:

Run a claim reaction test on this statement: "Trusted by 10,000+ teams
worldwide." I want to know if people find it credible or if it triggers
skepticism. Use 30 participants.

Results:

  • Agreement: 4.8/7. Moderate credibility.
  • 42% found it credible — “10,000 is specific enough to feel real.”
  • 31% skeptical — main trigger: “worldwide” felt like it was trying too hard. Quote: “Every startup says worldwide. Show me where.”
  • Recommended edit: Replace “worldwide” with a specific region or use case: “Trusted by 10,000+ product teams” or “Trusted by 10,000+ teams across 40 countries.”

Workflow 3: Message-Test Your Positioning

You’ve rewritten your product positioning and want to know what people actually take away from it — before you ship.

Prompt:

Message-test this copy with 50 people:

"The only research platform where AI conducts the interviews, your panel
does the talking, and every conversation compounds into a searchable
intelligence hub. Launch a study in minutes. Get depth insights in hours."

Results:

  • Clarity score: 7.1/10.
  • 62% correctly understood the core promise: AI-powered research that builds over time.
  • 23% thought it was a survey tool — the word “launch” triggered survey associations.
  • 15% focused on “intelligence hub” and assumed it was an analytics product, not a research tool.
  • Confusion driver: “your panel does the talking” was ambiguous — some thought you needed to bring your own panel.
  • Recommended edits: (1) Replace “your panel” with “real consumers” to clarify sourcing. (2) Replace “Launch a study” with “Start a conversation” to distance from survey framing.

Cost Management: Always Use Dry Run First

Before launching any study, get a cost estimate:

Prompt:

How much would it cost to run a preference check with 50 people on
three pricing page options?

Claude Code uses dry_run: true under the hood and returns:

  • Estimated cost: ~$400 (50 interviews at standard rates)
  • Estimated time: 2-3 hours
  • No credits spent
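Concretely, a dry run is the same study payload with the dry_run flag set, and the response carries only the estimate. A sketch (all field names besides dry_run are assumptions, not the documented schema):

```json
{
  "request": {
    "mode": "preference_check",
    "sample_size": 50,
    "dry_run": true
  },
  "response": {
    "estimated_cost_usd": 400,
    "estimated_turnaround": "2-3 hours",
    "credits_spent": 0
  }
}
```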

Only after you confirm does the study launch. This is the most important habit to build — always estimate before you commit.

Pricing Quick Reference

Free tier: 3 interviews, no credit card
Pay-as-you-go: Chat $10, Audio $20, Video $40 per interview
Professional ($999/mo): 50 interviews included, then standard rates
Typical study (25 people): ~$200-500 depending on interview type

Five Tips for Better Agent-Driven Research

1. Be specific about what you’re testing. “Test this copy” is fine. “Test whether this copy communicates speed or reliability as the primary benefit” is better. The more specific your question, the more actionable the results.

2. Use dry_run for every new study type. Until you have a feel for costs, estimate first. It takes two seconds and prevents surprises.

3. Default to 25 participants for quick checks. That’s enough for clear signal on preference checks and claim reactions. Go to 50+ for high-stakes decisions like rebrands or pricing changes.

4. Check past research before launching new studies. Ask Claude Code: “List my completed studies.” Every study compounds in the intelligence hub — you might already have the answer.

5. Don’t bury the minority view. When results come back, the 15% who disagreed are often the most useful signal. They’re the edge case that becomes a support ticket, the objection that kills a deal, the confusion that tanks your conversion rate. (This is exactly why AI agents need real consumer data — training data can’t surface these edge cases.)

What Happens Under the Hood

When you ask Claude Code to run a study:

  1. Claude Code calls ask_humans via MCP with your stimuli, mode, and sample size.
  2. User Intuition recruits participants from a 4M+ vetted global panel matching your criteria.
  3. AI-moderated conversations happen — each participant gets a 5-7 level deep interview exploring their reactions, reasoning, and emotions.
  4. Results are synthesized into structured findings: metrics, themes, minority views, and recommended edits.
  5. Claude Code retrieves results via get_results and presents them in your terminal.

The MCP connection is the bridge. The research infrastructure — panel, moderation AI, analysis engine, intelligence hub — is User Intuition’s platform doing the heavy lifting.

Get Started

  1. Sign up for free (3 interviews, no credit card)
  2. Add the MCP config above
  3. Ask Claude Code to run your first preference check

Your agent just became the first one in your stack that can ask real people what they think. Every study you run compounds into a searchable intelligence hub that gets smarter over time.

Related: Consumer Research API: Full Call/Response Examples | MCP for Market Research: Complete Guide | Agentic Research Platform

Server URL: https://mcp.userintuition.ai/mcp
Docs: docs.userintuition.ai/integrations/mcp-server
Developer Quick Start: research.userintuition.ai

Frequently Asked Questions

Can Claude Code actually run research with real people?

Yes. Claude Code connects to User Intuition's MCP server and can launch real studies with real people. It handles the full lifecycle: creating the study, recruiting from a 4M+ vetted panel, conducting AI-moderated interviews, and returning structured results — all within your terminal session.

How do I set up the MCP connection?

Add one entry to your MCP config. In ~/.claude.json or your project's .mcp.json, add: {"mcpServers": {"userintuition": {"url": "https://mcp.userintuition.ai/mcp"}}}. Or run: claude mcp add userintuition --transport http https://mcp.userintuition.ai/mcp. OAuth prompts on first use.

How much does a study cost?

Studies start at approximately $200 for 25 participants. Use dry_run: true to get an exact cost estimate before launching. The Professional plan ($999/month) includes 50 free interviews. There's no additional charge for the MCP connection itself.

How long do studies take?

Most studies return results in 2-3 hours. The MCP connection itself is instant — the time is for real people to participate in AI-moderated conversations. You can check status anytime by asking Claude Code to retrieve results.

What research modes are available?

Three modes: preference_check (compare 2-5 options, see which people prefer and why), claim_reaction (test whether people believe a specific claim), and message_test (test what your copy actually communicates vs. what you intended). Each returns structured findings with real participant quotes.

Can I estimate costs before spending credits?

Yes. Always use dry_run: true first. Ask Claude Code: 'How much would it cost to run a preference check with 50 people?' The agent will return an estimated cost and timeline without spending any credits.

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve: 3 interviews free. No credit card required.

Enterprise: See a real study built live in 30 minutes.

No contract · No retainers · Results in 72 hours