Consumer Research API for AI Agents

Every AI agent stack has APIs for databases, search, code execution, and document processing. But when the agent needs to know what real people think about your headline, your pricing, or your claim — it has nothing to call.

User Intuition’s consumer research API changes that. Through the Model Context Protocol (MCP), any AI agent can programmatically launch real consumer studies and receive structured qualitative results.

This guide shows the actual API calls and responses, so you can see exactly what your agent sends and what it gets back.

The API: Five MCP Tools


The consumer research API exposes five tools through the MCP server at mcp.userintuition.ai/mcp:

Tool         | Purpose                      | When to Use
ask_humans   | Launch a new study           | Agent needs fresh consumer signal
get_results  | Retrieve study results       | Check on a running or completed study
list_studies | Query past research          | Before launching, check what you already know
edit_study   | Modify an active study       | Change parameters before completion
cancel_study | Cancel an in-progress study  | Abort if no longer needed
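The JSON blocks in this guide show invocations in simplified form. On the wire, an MCP client wraps each call in a JSON-RPC 2.0 tools/call request; a sketch of that envelope in Python, with illustrative arguments:

# How an MCP client wraps a tool invocation on the wire (JSON-RPC 2.0).
# The simplified {"tool": ..., "parameters": ...} blocks in this guide
# map to the "name" and "arguments" fields of a tools/call request.
envelope = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_humans",
        "arguments": {"mode": "preference_check", "sample_size": 25},
    },
}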

API Call: Launching a Preference Check


Here’s what your agent sends when it needs to know which headline real people prefer:

{
  "tool": "ask_humans",
  "parameters": {
    "mode": "preference_check",
    "stimuli": [
      "Ship features your customers actually want",
      "Stop guessing what customers think",
      "Customer research in hours, not months"
    ],
    "sample_size": 25,
    "context": "Landing page headline for a B2B SaaS research platform targeting product managers"
  }
}

What happens next: The platform recruits 25 participants matching the context profile from a vetted 4M+ global panel. Each participant enters an AI-moderated conversation where they see all three options, select their preference, and explain their reasoning through 5-7 levels of probing depth. The AI moderator adapts follow-up questions based on each participant’s responses.

2-3 hours later, the agent calls get_results:

{
  "tool": "get_results",
  "parameters": {
    "study_id": "study_abc123"
  }
}
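While the study runs, retrieval is a simple poll loop. A minimal sketch, assuming a call_tool helper that forwards invocations to the MCP server (hypothetical, passed in as a parameter) and assuming get_results reports a non-final status until the study completes:

import time
from typing import Callable

def wait_for_results(call_tool: Callable[[str, dict], dict],
                     study_id: str,
                     poll_seconds: int = 900) -> dict:
    # Poll get_results until the platform marks the study completed.
    # Preference checks typically finish in 2-3 hours.
    while True:
        results = call_tool("get_results", {"study_id": study_id})
        if results.get("status") == "completed":
            return results
        time.sleep(poll_seconds)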

API Response: What Rich Qual Data Looks Like


This is what the agent receives — structured qualitative data that no survey API can produce:

{
  "study_id": "study_abc123",
  "status": "completed",
  "mode": "preference_check",
  "participants": 25,
  "completion_time": "2h 47m",
  "headline_metric": {
    "winner": "Customer research in hours, not months",
    "distribution": {
      "Customer research in hours, not months": 0.48,
      "Ship features your customers actually want": 0.32,
      "Stop guessing what customers think": 0.20
    }
  },
  "driving_themes": [
    {
      "theme": "Specificity of time contrast",
      "prevalence": 0.71,
      "summary": "The 'hours, not months' framing created a believable, concrete contrast that made the value proposition feel tangible rather than aspirational.",
      "evidence": [
        {
          "participant": "P-007",
          "quote": "Hours not months — that's a real promise. The other two could be any SaaS company."
        },
        {
          "participant": "P-019",
          "quote": "I've been burned by research that takes forever. If you can actually do it in hours, that's the only thing I need to know."
        }
      ]
    },
    {
      "theme": "Outcome focus resonates but feels generic",
      "prevalence": 0.54,
      "summary": "Option 1's 'features customers actually want' spoke to a real pain but the word 'actually' felt slightly condescending to several participants.",
      "evidence": [
        {
          "participant": "P-003",
          "quote": "I like the idea but 'actually want' implies I don't already know my customers. That's a bit insulting."
        },
        {
          "participant": "P-014",
          "quote": "This one gets the job-to-be-done right — I want to build what matters. But 'actually' makes it sound like I've been guessing."
        }
      ]
    }
  ],
  "minority_objections": [
    {
      "theme": "Emotional resonance with pain of guessing",
      "prevalence": 0.20,
      "summary": "A significant minority connected deeply with Option 2's direct acknowledgment of the guessing problem.",
      "evidence": [
        {
          "participant": "P-011",
          "quote": "This one gets it. I've wasted months building the wrong thing because we were guessing. 'Stop guessing' is exactly what I want to hear."
        }
      ]
    }
  ],
  "recommendations": [
    "Lead with Option 3's time contrast — 'hours, not months' is the strongest differentiator",
    "Consider softening 'hours' to 'days' if turnaround isn't consistently sub-24h",
    "Test a hybrid: 'Stop guessing. Get customer research in hours, not months.'"
  ],
  "data_quality": {
    "avg_conversation_depth": 5.3,
    "avg_conversation_length_minutes": 12.4,
    "engagement_score": 0.91,
    "fraud_flags": 0
  }
}

This is what makes the consumer research API different from a survey API. A survey would return {option_a: 12, option_b: 8, option_c: 5}. The consumer research API returns why people prefer what they prefer, the specific objections, the emotional triggers, and actionable recommendations — all traced to real verbatim quotes from real participants.

Notice the data quality block in the response: average conversation depth of 5.3 levels, 12.4-minute conversations, and zero fraud flags. These are not checkbox completions — each participant engaged in a genuine AI-moderated conversation that probed their initial reaction through multiple layers of reasoning. The 98% participant satisfaction rate reflects that people find these conversations engaging rather than tedious, which directly translates to higher-quality signal. And because the platform recruits from a 4M+ vetted global panel across 50+ languages, the participant pool is not constrained to English-speaking, survey-habituated respondents.
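For fully automated pipelines, that data quality block is worth gating on before the agent trusts the findings. A sketch; the thresholds are illustrative, not platform guidance:

def passes_quality_bar(results: dict,
                       min_depth: float = 4.0,
                       min_engagement: float = 0.8) -> bool:
    # Check the study's quality indicators against illustrative thresholds.
    quality = results["data_quality"]
    return (
        quality["avg_conversation_depth"] >= min_depth
        and quality["engagement_score"] >= min_engagement
        and quality["fraud_flags"] == 0
    )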

API Call: Testing a Claim


When your agent needs to know if a marketing claim is believable:

{
  "tool": "ask_humans",
  "parameters": {
    "mode": "claim_reaction",
    "stimuli": [
      "Trusted by 10,000+ teams worldwide"
    ],
    "sample_size": 30,
    "context": "Homepage social proof claim for a B2B SaaS platform"
  }
}

API Response: Claim Reaction Results


{
  "study_id": "study_def456",
  "status": "completed",
  "mode": "claim_reaction",
  "participants": 30,
  "headline_metric": {
    "agreement_score": 4.8,
    "scale": "1-7",
    "interpretation": "Moderate credibility — believed by a majority but with notable skepticism"
  },
  "driving_themes": [
    {
      "theme": "Number specificity builds credibility",
      "prevalence": 0.42,
      "summary": "10,000 felt specific enough to be real. Round numbers trigger more skepticism.",
      "evidence": [
        {
          "participant": "P-022",
          "quote": "10,000 is specific enough that I believe it. If they said 'thousands' I'd roll my eyes."
        }
      ]
    },
    {
      "theme": "'Worldwide' triggers skepticism",
      "prevalence": 0.31,
      "summary": "The word 'worldwide' felt like marketing overreach. Participants wanted geographic specificity.",
      "evidence": [
        {
          "participant": "P-008",
          "quote": "Every startup says worldwide. Show me where. '40 countries' or 'across North America and Europe' — that I'd believe."
        }
      ]
    }
  ],
  "recommendations": [
    "Keep '10,000+ teams' — the number carries credibility",
    "Replace 'worldwide' with specific geography: '10,000+ teams across 40 countries'",
    "Alternative: '10,000+ product teams' — adding the qualifier makes it more believable and targeted"
  ]
}

The agent now knows: the number works, the word “worldwide” doesn’t, and here’s exactly what to change. That’s intelligence a survey checkbox can’t produce.
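An agent can fold that judgment into a routing rule: ship the claim if the agreement score clears a bar, otherwise act on the rewrite suggestions. A sketch with an illustrative cutoff on the 1-7 scale:

def claim_verdict(results: dict, ship_threshold: float = 5.5) -> str:
    # Map the 1-7 agreement score to a next action (threshold is illustrative).
    score = results["headline_metric"]["agreement_score"]
    if score >= ship_threshold:
        return "ship"
    # Below the bar: act on the platform's concrete rewrite suggestions.
    return "revise: " + "; ".join(results["recommendations"])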

API Call: Cost Estimation (Dry Run)


Before committing credits, agents should always estimate costs:

{
  "tool": "ask_humans",
  "parameters": {
    "mode": "preference_check",
    "stimuli": ["Option A copy", "Option B copy", "Option C copy"],
    "sample_size": 50,
    "dry_run": true
  }
}

The response returns the estimate without spending credits:

{
  "dry_run": true,
  "estimated_cost": "$400",
  "estimated_time": "2-3 hours",
  "credits_required": 50,
  "credits_available": 120,
  "note": "No credits spent. Call again without dry_run to launch."
}
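In an agent loop, the dry run becomes a budget gate: estimate first, launch only if the cost clears both the budget and the credit balance. A sketch, reusing the hypothetical call_tool helper as a parameter:

def launch_if_affordable(call_tool, brief: dict, max_credits: int = 60) -> dict | None:
    # First pass: the dry run returns cost and credit balance without launching.
    estimate = call_tool("ask_humans", {**brief, "dry_run": True})
    affordable = estimate["credits_required"] <= min(max_credits,
                                                     estimate["credits_available"])
    if not affordable:
        return None  # over budget or out of credits; escalate to a human
    # Second pass: the same brief without dry_run actually launches the study.
    return call_tool("ask_humans", brief)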

API Call: Querying Past Research


Before launching a new study, check what you already know:

{
  "tool": "list_studies",
  "parameters": {
    "status": "completed",
    "limit": 10
  }
}

The response lists completed studies with summaries, so the agent can reference existing findings before spending credits on duplicate research. Every study feeds the Customer Intelligence Hub — compounding intelligence that makes each new study smarter.
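A dedup check along those lines, with the same hypothetical call_tool helper; the shape of the list_studies response (a "studies" array with "summary" fields) is an assumption based on the description above:

def already_researched(call_tool, topic_keywords: list[str]) -> bool:
    # Scan recent completed studies for keyword overlap before spending credits.
    past = call_tool("list_studies", {"status": "completed", "limit": 10})
    for study in past.get("studies", []):  # response shape assumed
        summary = study.get("summary", "").lower()
        if any(kw.lower() in summary for kw in topic_keywords):
            return True
    return False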

Setup: Connecting Your Agent to the API


One-time configuration. The consumer research API uses the MCP standard — no custom SDK, no API keys in your code:

Claude Desktop / Claude Code:

{
  "mcpServers": {
    "userintuition": {
      "url": "https://mcp.userintuition.ai/mcp"
    }
  }
}

ChatGPT: Settings > Connected Apps > Add MCP Server > https://mcp.userintuition.ai/mcp

Cursor: Settings > MCP > Add Server > https://mcp.userintuition.ai/mcp

Custom agents (LangChain, CrewAI, AutoGen): Any framework with MCP support can connect using the server URL above.
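For a custom Python agent, a minimal connection sketch using the open-source MCP Python SDK (pip install mcp); it assumes the current v1.x SDK surface and leaves out authentication handling:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://mcp.userintuition.ai/mcp"

async def main() -> None:
    # Open a streamable-HTTP transport, then an MCP session over it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect the five tools above
            result = await session.call_tool(
                "list_studies", {"status": "completed", "limit": 10}
            )
            print(result.content)

asyncio.run(main())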

OAuth prompts on first use. Sign up free for 3 interviews, no credit card.

Why Structured Qual Data Matters for Agents


Traditional research APIs return either raw transcripts (useless for agents) or survey numbers (useless for decisions). The consumer research API returns structured qualitative data — the specific format agents need:

  • Headline metrics the agent can use in reports and recommendations
  • Driving themes with prevalence scores for prioritization
  • Minority objections that surface edge cases and risks
  • Verbatim quotes for evidence-backed citations
  • Recommendations the agent can act on immediately

This is what we call Human Signal — the missing data type in every agent stack. It’s the bridge between “what does the training data suggest?” and “what do real people actually think?”

For agent developers building autonomous workflows, the structured format matters as much as the content. An agent can parse the JSON response, extract the winning option, check whether the confidence threshold meets a predetermined bar, and either proceed with the decision or escalate to a human reviewer — all without manual intervention. The recommendations array gives the agent concrete next steps it can act on, and the minority objections surface edge cases that would otherwise become blind spots in automated decision-making. Studies return results in 48-72 hours at $20 per interview, making it practical to embed consumer validation as a standard step in any agent pipeline rather than treating it as an occasional, expensive detour.
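Concretely, that gate can be a few lines: extract the winner, compare its preference share against a predetermined bar, and escalate when the split is too close to call. A sketch; the 0.45 bar is illustrative:

def decide_or_escalate(results: dict, confidence_bar: float = 0.45) -> str:
    # Proceed autonomously only when the winning option clears the bar
    # and the study shows no fraud flags.
    metric = results["headline_metric"]
    winner = metric["winner"]
    share = metric["distribution"][winner]
    if share >= confidence_bar and results["data_quality"]["fraud_flags"] == 0:
        return f"proceed: adopt '{winner}' ({share:.0%} preference)"
    return "escalate: preference split too close; route to a human reviewer"

Against the sample response above, the winner's 48% share clears a 0.45 bar, so the agent proceeds without escalation.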

Pricing


Plan                       | Cost
Free tier                  | 3 interviews, no credit card
Pay-as-you-go              | Chat $10, Audio $20, Video $40 per interview
Professional ($999/mo)     | 50 interviews included, then standard rates
Typical study (25 people)  | Approximately $200-500 depending on interview type

The MCP connection itself is free. You pay only for the research.

Ready to give your agent a consumer research API? Sign up free, read the MCP docs, or explore the agentic research platform.

Server URL: https://mcp.userintuition.ai/mcp

Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

What is a consumer research API?
A consumer research API is a programmatic interface that lets software — including AI agents — launch real consumer studies and retrieve structured results without human intervention. User Intuition's API uses the Model Context Protocol (MCP) standard, exposing five tools at mcp.userintuition.ai/mcp that any MCP-compatible agent can call.

How do AI agents use the consumer research API?
AI agents call the consumer research API through MCP tool invocations. The agent calls ask_humans with a research brief (mode, stimuli, sample size), the platform recruits real participants and conducts AI-moderated conversations, then the agent calls get_results to retrieve structured findings with preference splits, themes, and verbatim quotes.

What does the API return?
The API returns structured JSON with a headline metric (e.g., '68% preferred Option A'), driving themes ranked by prevalence, minority objections with real participant quotes, data quality indicators, and study metadata. Every finding traces to specific verbatim quotes from real people.

Can agents estimate the cost of a study before launching it?
Yes. Set dry_run: true in any ask_humans call to get a cost estimate and timeline without spending credits. A typical 25-participant preference check costs approximately $200-500 depending on interview type (chat, audio, or video).

Which platforms can connect to the consumer research API?
Any MCP-compatible platform: ChatGPT, Claude, Claude Code, Cursor, and custom agent frameworks (LangChain, CrewAI, AutoGen, OpenAI Agents SDK). The API uses the open Model Context Protocol standard backed by Anthropic, OpenAI, Google, and Microsoft.

How is the consumer research API different from a survey API?
Survey APIs distribute fixed questionnaires and return checkbox aggregations. The consumer research API launches AI-moderated conversations that probe 5-7 levels deep, returning qualitative insights — the why behind preferences, not just the what. Each participant has a real conversation, not a form submission.