Every AI agent stack has APIs for databases, search, code execution, and document processing. But when the agent needs to know what real people think about your headline, your pricing, or your claim — it has nothing to call.
User Intuition’s consumer research API changes that. Through the Model Context Protocol (MCP), any AI agent can programmatically launch real consumer studies and receive structured qualitative results.
This guide shows the actual API calls and responses, so you can see exactly what your agent sends and what it gets back.
## The API: Five MCP Tools
The consumer research API exposes five tools through the MCP server at `mcp.userintuition.ai/mcp`:
| Tool | Purpose | When to Use |
|---|---|---|
| `ask_humans` | Launch a new study | Agent needs fresh consumer signal |
| `get_results` | Retrieve study results | Check on a running or completed study |
| `list_studies` | Query past research | Before launching, check what you already know |
| `edit_study` | Modify an active study | Change parameters before completion |
| `cancel_study` | Cancel an in-progress study | Abort if no longer needed |
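In MCP terms, each of these is a tool invocation carrying a JSON arguments object. As a minimal sketch of assembling that payload shape (the helper function is hypothetical; your MCP client library handles the actual wire protocol):

```python
def build_tool_call(tool: str, **parameters) -> dict:
    """Assemble a tool-call payload in the shape shown in this guide.
    Hypothetical helper -- the real transport is handled by your MCP client."""
    allowed = {"ask_humans", "get_results", "list_studies",
               "edit_study", "cancel_study"}
    if tool not in allowed:
        raise ValueError(f"unknown tool: {tool}")
    return {"tool": tool, "parameters": parameters}

call = build_tool_call(
    "ask_humans",
    mode="preference_check",
    stimuli=["Option A", "Option B"],
    sample_size=10,
)
```

The same builder covers all five tools, since they differ only in name and parameters.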
## API Call: Launching a Preference Check
Here’s what your agent sends when it needs to know which headline real people prefer:
```json
{
  "tool": "ask_humans",
  "parameters": {
    "mode": "preference_check",
    "stimuli": [
      "Ship features your customers actually want",
      "Stop guessing what customers think",
      "Customer research in hours, not months"
    ],
    "sample_size": 25,
    "context": "Landing page headline for a B2B SaaS research platform targeting product managers"
  }
}
```
**What happens next:** The platform recruits 25 participants matching the context profile from a vetted 4M+ global panel. Each participant enters an AI-moderated conversation where they see all three options, select their preference, and explain their reasoning through 5-7 levels of follow-up probing. The AI moderator adapts its questions to each participant's responses.
Two to three hours later, the agent calls `get_results`:
```json
{
  "tool": "get_results",
  "parameters": {
    "study_id": "study_abc123"
  }
}
```
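Studies complete asynchronously, so a common pattern is to poll `get_results` until `status` flips to `completed`. A sketch, assuming a `call_tool` function supplied by your MCP client (the function name and polling cadence are illustrative):

```python
import time

def wait_for_results(call_tool, study_id,
                     interval_s=600, timeout_s=4 * 3600, sleep=time.sleep):
    """Poll get_results until the study completes or the timeout expires.
    `call_tool` is whatever your MCP client exposes for invoking server
    tools (an assumption -- client APIs vary by framework)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = call_tool("get_results", {"study_id": study_id})
        if result.get("status") == "completed":
            return result
        sleep(interval_s)
    raise TimeoutError(f"study {study_id} did not complete in time")
```

A 10-minute interval is a reasonable default given the 2-3 hour turnaround; there's nothing to gain from hammering the endpoint.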
## API Response: What Rich Qual Data Looks Like
This is what the agent receives — structured qualitative data that no survey API can produce:
```json
{
  "study_id": "study_abc123",
  "status": "completed",
  "mode": "preference_check",
  "participants": 25,
  "completion_time": "2h 47m",
  "headline_metric": {
    "winner": "Customer research in hours, not months",
    "distribution": {
      "Customer research in hours, not months": 0.48,
      "Ship features your customers actually want": 0.32,
      "Stop guessing what customers think": 0.20
    }
  },
  "driving_themes": [
    {
      "theme": "Specificity of time contrast",
      "prevalence": 0.71,
      "summary": "The 'hours, not months' framing created a believable, concrete contrast that made the value proposition feel tangible rather than aspirational.",
      "evidence": [
        {
          "participant": "P-007",
          "quote": "Hours not months — that's a real promise. The other two could be any SaaS company."
        },
        {
          "participant": "P-019",
          "quote": "I've been burned by research that takes forever. If you can actually do it in hours, that's the only thing I need to know."
        }
      ]
    },
    {
      "theme": "Outcome focus resonates but feels generic",
      "prevalence": 0.54,
      "summary": "Option 1's 'features customers actually want' spoke to a real pain but the word 'actually' felt slightly condescending to several participants.",
      "evidence": [
        {
          "participant": "P-003",
          "quote": "I like the idea but 'actually want' implies I don't already know my customers. That's a bit insulting."
        },
        {
          "participant": "P-014",
          "quote": "This one gets the job-to-be-done right — I want to build what matters. But 'actually' makes it sound like I've been guessing."
        }
      ]
    }
  ],
  "minority_objections": [
    {
      "theme": "Emotional resonance with pain of guessing",
      "prevalence": 0.20,
      "summary": "A significant minority connected deeply with Option 2's direct acknowledgment of the guessing problem.",
      "evidence": [
        {
          "participant": "P-011",
          "quote": "This one gets it. I've wasted months building the wrong thing because we were guessing. 'Stop guessing' is exactly what I want to hear."
        }
      ]
    }
  ],
  "recommendations": [
    "Lead with Option 3's time contrast — 'hours, not months' is the strongest differentiator",
    "Consider softening 'hours' to 'days' if turnaround isn't consistently sub-24h",
    "Test a hybrid: 'Stop guessing. Get customer research in hours, not months.'"
  ],
  "data_quality": {
    "avg_conversation_depth": 5.3,
    "avg_conversation_length_minutes": 12.4,
    "engagement_score": 0.91,
    "fraud_flags": 0
  }
}
```
This is what makes the consumer research API different from a survey API. A survey would return `{option_a: 12, option_b: 8, option_c: 5}`. The consumer research API returns *why* people prefer what they prefer, the specific objections, the emotional triggers, and actionable recommendations — all traced to real verbatim quotes from real participants.
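Because the payload is structured, an agent can reduce it to a decision programmatically rather than re-parsing prose. A sketch using the field names from the response above (the prevalence threshold is an illustrative choice):

```python
def summarize_preference(result: dict, min_prevalence: float = 0.5) -> dict:
    """Pull the winner, its vote share, and the dominant themes out of a
    preference_check result. Field names follow the example response;
    the 0.5 prevalence cutoff is an assumption, not an API constant."""
    metric = result["headline_metric"]
    winner = metric["winner"]
    share = metric["distribution"][winner]
    major_themes = [t["theme"]
                    for t in result.get("driving_themes", [])
                    if t["prevalence"] >= min_prevalence]
    return {"winner": winner, "share": share, "themes": major_themes}
```

Run against the response above, this hands the agent a winner, a share, and the themes prevalent enough to cite in a recommendation.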
## API Call: Testing a Claim
When your agent needs to know if a marketing claim is believable:
```json
{
  "tool": "ask_humans",
  "parameters": {
    "mode": "claim_reaction",
    "stimuli": [
      "Trusted by 10,000+ teams worldwide"
    ],
    "sample_size": 30,
    "context": "Homepage social proof claim for a B2B SaaS platform"
  }
}
```
## API Response: Claim Reaction Results
```json
{
  "study_id": "study_def456",
  "status": "completed",
  "mode": "claim_reaction",
  "participants": 30,
  "headline_metric": {
    "agreement_score": 4.8,
    "scale": "1-7",
    "interpretation": "Moderate credibility — believed by a majority but with notable skepticism"
  },
  "driving_themes": [
    {
      "theme": "Number specificity builds credibility",
      "prevalence": 0.42,
      "summary": "10,000 felt specific enough to be real. Round numbers trigger more skepticism.",
      "evidence": [
        {
          "participant": "P-022",
          "quote": "10,000 is specific enough that I believe it. If they said 'thousands' I'd roll my eyes."
        }
      ]
    },
    {
      "theme": "'Worldwide' triggers skepticism",
      "prevalence": 0.31,
      "summary": "The word 'worldwide' felt like marketing overreach. Participants wanted geographic specificity.",
      "evidence": [
        {
          "participant": "P-008",
          "quote": "Every startup says worldwide. Show me where. '40 countries' or 'across North America and Europe' — that I'd believe."
        }
      ]
    }
  ],
  "recommendations": [
    "Keep '10,000+ teams' — the number carries credibility",
    "Replace 'worldwide' with specific geography: '10,000+ teams across 40 countries'",
    "Alternative: '10,000+ product teams' — adding the qualifier makes it more believable and targeted"
  ]
}
```
The agent now knows: the number works, the word “worldwide” doesn’t, and here’s exactly what to change. That’s intelligence a survey checkbox can’t produce.
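The same reduction pattern applies to `claim_reaction` results: the agent can gate on the agreement score and promote high-prevalence skepticism themes into action items. A sketch under the field names above — the score threshold, prevalence floor, and keyword matching are all illustrative assumptions, not API behavior:

```python
def triage_claim(result: dict, pass_score: float = 5.0,
                 theme_floor: float = 0.3) -> dict:
    """Decide whether a claim is credible enough to ship as-is, and list
    the skepticism themes worth addressing first. Thresholds and the
    keyword match on 'skepticism' are illustrative choices."""
    score = result["headline_metric"]["agreement_score"]
    flagged = [t["theme"]
               for t in result.get("driving_themes", [])
               if t["prevalence"] >= theme_floor
               and "skepticism" in t["theme"].lower()]
    return {"ship": score >= pass_score and not flagged,
            "score": score,
            "address": flagged}
```

For the response above, a 4.8 score against a 5.0 bar plus a flagged skepticism theme tells the agent to revise the claim before shipping.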
## API Call: Cost Estimation (Dry Run)
Before committing credits, agents should always estimate costs:
```json
{
  "tool": "ask_humans",
  "parameters": {
    "mode": "preference_check",
    "stimuli": ["Option A copy", "Option B copy", "Option C copy"],
    "sample_size": 50,
    "dry_run": true
  }
}
```
The response:

```json
{
  "dry_run": true,
  "estimated_cost": "$400",
  "estimated_time": "2-3 hours",
  "credits_required": 50,
  "credits_available": 120,
  "note": "No credits spent. Call again without dry_run to launch."
}
```
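This lets the agent gate the launch on available credits before spending anything. A minimal sketch of that check, using the field names from the dry-run response above:

```python
def can_launch(estimate: dict) -> bool:
    """Return True only if the account holds enough credits for the
    estimated study. Field names follow the dry_run response shape."""
    if not estimate.get("dry_run"):
        raise ValueError("expected a dry_run estimate, not a live response")
    return estimate["credits_available"] >= estimate["credits_required"]
```

If the check fails, the agent can shrink `sample_size` and re-estimate instead of erroring out mid-workflow.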
## API Call: Querying Past Research
Before launching a new study, check what you already know:
```json
{
  "tool": "list_studies",
  "parameters": {
    "status": "completed",
    "limit": 10
  }
}
```
The response lists completed studies with summaries, so the agent can reference existing findings before spending credits on duplicate research. Every study feeds the Customer Intelligence Hub — compounding intelligence that makes each new study smarter.
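A lightweight dedup pass over those summaries can catch duplicate research before credits are spent. A sketch — the crude keyword-overlap matching, the `summary` field name, and the overlap threshold are all illustrative assumptions:

```python
def find_similar_studies(past_studies: list, question: str,
                         min_overlap: int = 3) -> list:
    """Flag completed studies whose summaries share several content words
    with the new research question. Naive bag-of-words matching,
    purely illustrative; a real agent might use embeddings instead."""
    stop = {"the", "a", "an", "for", "of", "to", "in", "and",
            "what", "which", "do"}
    q_words = {w for w in question.lower().split() if w not in stop}
    hits = []
    for study in past_studies:
        s_words = {w for w in study.get("summary", "").lower().split()
                   if w not in stop}
        if len(q_words & s_words) >= min_overlap:
            hits.append(study["study_id"])
    return hits
```

If this returns hits, the agent can call `get_results` on the existing study instead of launching a new one.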
## Setup: Connecting Your Agent to the API
One-time configuration. The consumer research API uses the MCP standard — no custom SDK, no API keys in your code:
**Claude Desktop / Claude Code:**
```json
{
  "mcpServers": {
    "userintuition": {
      "url": "https://mcp.userintuition.ai/mcp"
    }
  }
}
```
**ChatGPT:** Settings > Connected Apps > Add MCP Server > `https://mcp.userintuition.ai`

**Cursor:** Settings > MCP > Add Server > `https://mcp.userintuition.ai/mcp`

**Custom agents (LangChain, CrewAI, AutoGen):** Any framework with MCP support can connect using the server URL above.
OAuth prompts on first use. Sign up free for 3 interviews, no credit card.
## Why Structured Qual Data Matters for Agents
Traditional research APIs return either raw transcripts (useless for agents) or survey numbers (useless for decisions). The consumer research API returns structured qualitative data — the specific format agents need:
- Headline metrics the agent can use in reports and recommendations
- Driving themes with prevalence scores for prioritization
- Minority objections that surface edge cases and risks
- Verbatim quotes for evidence-backed citations
- Recommendations the agent can act on immediately
This is what we call Human Signal — the missing data type in every agent stack. It’s the bridge between “what does the training data suggest?” and “what do real people actually think?”
## Pricing
| Plan | Cost |
|---|---|
| Free tier | 3 interviews, no credit card |
| Pay-as-you-go | Chat $10, Audio $20, Video $40 per interview |
| Professional ($999/mo) | 50 interviews included, then standard rates |
| Typical study (25 people) | ~$200-500 depending on interview type |
The MCP connection itself is free. You pay only for the research.
Ready to give your agent a consumer research API? Sign up free, read the MCP docs, or explore the agentic research platform.
Server URL: `https://mcp.userintuition.ai/mcp`