AI agents in 2026 can write code, review PRs, draft emails, and summarize meetings. What they cannot do on their own is ask 500 real consumers what they think. Training data answers that question with plausible text, not consumer evidence. For product teams shipping real features to real users, that gap matters.
The Model Context Protocol (MCP) closes it. MCP is the open standard, now backed by Anthropic, OpenAI, Google, and Microsoft, that lets AI agents discover and invoke external tools through a single interface. User Intuition’s production MCP server at https://mcp.userintuition.ai/mcp exposes real consumer research as a tool call. Any MCP-capable agent — Claude Desktop, Claude Code, Cursor, ChatGPT, custom LangChain agents — can launch studies, retrieve structured results, and query accumulated intelligence without a custom API wrapper.
This guide is for developers integrating the MCP server into an AI workflow. It covers configuration, authentication, the four core tool calls, payload schemas, error handling, rate limits, and the two integration patterns that cover 95% of use cases.
Why MCP Instead of Direct REST API Calls?
Before MCP, integrating consumer research into an AI workflow meant writing a custom wrapper per client. A Cursor integration, a Claude Desktop integration, a ChatGPT plugin, a LangChain tool — each one a separate codebase with its own auth, its own error handling, its own schema.
MCP collapses those four integrations into one. The server exposes a single tool catalog. Every MCP-capable client discovers those tools through the standard initialization handshake and invokes them through the standard tool-call schema. One integration works everywhere.
There’s a second benefit: schema awareness. When an agent calls create_study, the MCP client validates the arguments against the tool schema before the request leaves the client. That catches errors (missing fields, wrong types) locally instead of round-tripping to the server. For agents making tool calls at inference time, local validation is the difference between a recovered error and a burned turn.
For backend batch jobs that don’t need tool discovery or schema negotiation, direct REST calls against the User Intuition API are still a reasonable choice. For anything agent-driven, MCP is the right abstraction.
Prerequisites
Before you start, you need three things:
- A User Intuition account. Sign up at app.userintuition.ai/sign-up. The Starter plan gives you 3 free interviews to test the integration end-to-end.
- An API key. Generate one at app.userintuition.ai/settings/api. Store it in an environment variable (USER_INTUITION_API_KEY); never commit it.
- An MCP-capable client. Claude Desktop (v1.4+), Claude Code, Cursor (v0.42+), or a custom LangChain/CrewAI agent with the MCP adapter installed.
Server-side jobs can use the API key directly. For end-user-facing integrations in Claude Desktop or Cursor, OAuth 2.1 with PKCE is the recommended auth path — it lets each user connect their own User Intuition account rather than sharing a service-level key.
How Do You Configure the MCP Server?
Claude Desktop
Add this to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the equivalent on your OS:
{
"mcpServers": {
"user-intuition": {
"url": "https://mcp.userintuition.ai/mcp",
"transport": "http",
"headers": {
"X-API-Key": "${USER_INTUITION_API_KEY}"
}
}
}
}
Restart Claude Desktop. The four User Intuition tools appear in the tool tray automatically.
Claude Code
Claude Code reads MCP config from ~/.claude.json or a project-level .mcp.json. The schema is the same as Claude Desktop:
{
"mcpServers": {
"user-intuition": {
"url": "https://mcp.userintuition.ai/mcp",
"transport": "http",
"headers": { "X-API-Key": "${USER_INTUITION_API_KEY}" }
}
}
}
For the full Claude Code setup including OAuth flow, see our agentic research platform which hosts the live MCP endpoint that Claude Code connects to.
Cursor
Cursor’s MCP config lives at ~/.cursor/mcp.json and uses the same schema as the Claude Desktop config above. Once configured, the tools appear in the Cursor composer as invokable actions.
Custom Agents (LangChain, CrewAI, AutoGen)
LangChain:
import os

from langchain_mcp_adapters.client import MCPClient

client = MCPClient(
    url="https://mcp.userintuition.ai/mcp",
    headers={"X-API-Key": os.environ["USER_INTUITION_API_KEY"]},
)
tools = await client.get_tools()  # inside an async context; returns the 4 research tools as LangChain Tools
CrewAI uses a similar adapter. Any framework with an MCP client can connect the same way.
The Four Core Tool Calls
1. create_study
Launches a new consumer research study. Payload:
{
"name": "SSO add-on pricing test",
"mode": "preference_check",
"stimulus": "Would you pay $15/user/month for SSO as an add-on to your current plan?",
"audience": {
"role": "IT admin or security lead",
"company_size": "100-1000 employees",
"geos": ["US", "UK", "CA"]
},
"sample_size": 20,
"languages": ["en"],
"webhook_url": "https://your-app.com/mcp-webhooks/studies"
}
Returns a study_id and an estimated completion time.
2. get_study_status
Poll for completion. Accepts a study_id and returns one of pending, recruiting, in_progress, analyzing, complete, or failed. Use the response’s eta_seconds field to decide whether to keep polling or switch to a webhook.
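The eta_seconds hint can drive the polling decision directly. Below is a minimal Python sketch; call_tool is a hypothetical stand-in for however your MCP client invokes a tool, and the threshold values are illustrative defaults, not part of the API contract:

```python
import time

TERMINAL = {"complete", "failed"}

def wait_for_study(call_tool, study_id, interval=30, webhook_threshold=600, max_polls=240):
    """Poll get_study_status until the study reaches a terminal state.

    call_tool is a hypothetical helper for invoking an MCP tool by name.
    Returns the final status payload, or None when eta_seconds says the
    study is far enough out that a webhook is the better pattern.
    """
    for _ in range(max_polls):
        status = call_tool("get_study_status", {"study_id": study_id})
        if status["status"] in TERMINAL:
            return status
        if status.get("eta_seconds", 0) > webhook_threshold:
            return None  # register a webhook instead of burning poll calls
        time.sleep(interval)
    return None
```

Returning None rather than raising keeps the "switch to webhooks" path a normal branch in agent code.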
3. get_study_results
Retrieves the structured findings once the study is complete. Response includes top themes with frequency and example verbatims, segment breakdowns, confidence scores, recruitment metadata, and a direct link to the full dashboard. Agents parse this directly into PRDs, Slack updates, or downstream tool calls.
4. query_intelligence
Searches the Customer Intelligence Hub for existing findings before launching new research. This is the read path — an agent can check “have we asked enterprise admins about SSO pricing in the last 90 days?” and skip redundant studies if the answer is yes.
Every study launched through create_study automatically feeds query_intelligence, so the system compounds over time. This is the same pattern described in agentic research for product teams.
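The check-before-launch pattern can be sketched as a single gating function. Assumptions are flagged in the comments: call_tool is a hypothetical helper, and the findings/age_days field names are illustrative — check the docs for the real query_intelligence response schema:

```python
def research_or_reuse(call_tool, question, audience, max_age_days=90):
    """Query existing intelligence before launching a new study.

    Assumes query_intelligence returns {"findings": [...]} where each
    finding carries an "age_days" field; both names are illustrative,
    not the documented schema. call_tool is a hypothetical MCP helper.
    """
    hits = call_tool("query_intelligence", {"query": question})
    fresh = [f for f in hits.get("findings", []) if f.get("age_days", 10**9) <= max_age_days]
    if fresh:
        return {"source": "intelligence_hub", "findings": fresh}
    # Nothing recent enough -- launch a fresh study instead.
    study = call_tool("create_study", {
        "name": question[:60],
        "mode": "preference_check",
        "stimulus": question,
        "audience": audience,
        "sample_size": 20,
    })
    return {"source": "new_study", "study_id": study["study_id"]}
```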
Integration Pattern 1: Synchronous Polling
Best for assumption checks that complete inside a chat session. Pattern:
- Agent calls create_study with sample_size: 10-20
- Agent calls get_study_status every 30-60 seconds
- When status returns complete, agent calls get_study_results
- Agent incorporates findings into its next response
This works inside a single turn for the user. The 2-3 hour wall-clock latency is handled by the client — Claude Desktop, Cursor, and Claude Code all support long-running tool calls with periodic status updates.
Integration Pattern 2: Asynchronous Webhooks
Best for studies with 50+ participants, multi-segment comparison, or multilingual recruitment where completion takes 48-72 hours. Pattern:
- Agent calls create_study with a webhook_url
- Agent returns to the user: “I’ve launched the study. I’ll notify you when it completes.”
- Your backend receives a POST at webhook_url when the study finishes
- Your backend pings the user (Slack, email, Linear comment) with the results
This pattern is mandatory for longer-running studies — no one wants an agent session hanging for 72 hours. For a deeper look at when each pattern is appropriate, see the agentic research vs traditional qual decision matrix.
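On the backend side, the receiver can be a very small HTTP handler. A sketch using only the Python standard library — the signature check follows the sha256=<hex> HMAC scheme described later under Webhook Signing and Verification, and the payload field names in the handler are illustrative, not the documented schema:

```python
import hashlib
import hmac
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

WEBHOOK_SECRET = b"replace-with-your-webhook-secret"  # shown once at registration

def verify_signature(raw_body: bytes, signature: str, secret: bytes) -> bool:
    """Constant-time check of the sha256=<hex> HMAC signature header."""
    expected = "sha256=" + hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

class StudyWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        raw = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        sig = self.headers.get("X-User-Intuition-Signature", "")
        if not verify_signature(raw, sig, WEBHOOK_SECRET):
            self.send_response(401)
            self.end_headers()
            return
        event = json.loads(raw)
        # Hand off to Slack/email/Linear here; "study_id" is an
        # illustrative field name, not the documented payload schema.
        print("study complete:", event.get("study_id"))
        self.send_response(200)
        self.end_headers()

# To run locally: HTTPServer(("", 8000), StudyWebhook).serve_forever()
```

Pair this with ngrok (as suggested in the local-testing section) to exercise the full async loop before deploying.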
Error Handling
The MCP server returns standard HTTP status codes. Key cases:
- 401 Unauthorized: API key missing or invalid. Regenerate at the settings page.
- 402 Payment Required: Account out of credits. Add payment or upgrade plan.
- 429 Too Many Requests: Rate limit exceeded. Retry after the Retry-After header.
- 422 Unprocessable Entity: Schema validation failure. Response body identifies the offending field.
- 5xx: Server-side issue. MCP clients should retry with exponential backoff.
Agents should treat 402 and 422 as permanent (don’t retry without user intervention) and 429/5xx as transient (retry with backoff).
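That permanent/transient split maps onto a small retry wrapper. A sketch, assuming do_call returns a (status_code, body, headers) tuple — that shape is an assumption of this example, not part of the MCP server contract:

```python
import random
import time

PERMANENT = {401, 402, 422}  # don't retry without user intervention

def call_with_retry(do_call, max_attempts=5):
    """Retry transient failures with exponential backoff plus jitter.

    do_call returns (status_code, body, headers); the tuple shape is an
    assumption for this sketch, not the MCP server's actual interface.
    """
    delay = 1.0
    for _ in range(max_attempts):
        status, body, headers = do_call()
        if status < 400:
            return body
        if status in PERMANENT:
            raise RuntimeError(f"permanent error {status}: fix and retry manually")
        # 429 and 5xx: honor Retry-After when present, else back off.
        wait = float(headers.get("Retry-After", delay))
        time.sleep(wait + random.uniform(0, 0.25))
        delay = min(delay * 2, 60)
    raise RuntimeError("retries exhausted")
```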
Rate Limits and Quotas
Default limits per API key:
- 60 tool calls per minute
- 10 concurrent active studies
- 10,000 tool calls per day
Enterprise accounts have higher limits. Every response includes X-RateLimit-Remaining and X-RateLimit-Reset headers so agents can self-throttle. For high-volume batch workflows, contact sales@userintuition.ai for a dedicated quota.
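Self-throttling from those headers can be a pure function that the caller sleeps on. A sketch that assumes X-RateLimit-Reset is an epoch timestamp in seconds (verify this against the docs — some APIs send seconds-until-reset instead):

```python
import time

def throttle_delay(headers, now=None):
    """Seconds to wait before the next call, from rate-limit headers.

    Assumes X-RateLimit-Reset is an epoch timestamp in seconds; check the
    docs, since some APIs send seconds-until-reset instead.
    """
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset_at = float(headers.get("X-RateLimit-Reset", now))
    window = max(0.0, reset_at - now)
    if remaining <= 0:
        return window              # bucket empty: wait out the window
    if remaining < 10:
        return window / remaining  # getting close: pace the rest out
    return 0.0
```

Usage: after each response, time.sleep(throttle_delay(resp.headers)) before the next tool call.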
OAuth 2.1 Flow for End-User Integrations
API keys are fine for server-side agents, but integrations shipped inside a user-facing client (Claude Desktop, Cursor, a Slack bot your customers install) need per-user auth. The MCP server supports OAuth 2.1 with PKCE for this case.
The flow mirrors standard OAuth 2.1 authorization code + PKCE:
- Client generates a PKCE code verifier and code challenge
- Client redirects user to
https://app.userintuition.ai/oauth/authorizewithclient_id,redirect_uri,code_challenge,code_challenge_method=S256, and requestedscope - User authenticates with User Intuition and approves the requested scopes
- User Intuition redirects back to
redirect_uriwith an authorization code - Client exchanges the code + PKCE verifier at
https://app.userintuition.ai/oauth/tokenfor an access token and refresh token - Client attaches the access token as
Authorization: Beareron every MCP request
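Step one of that flow — generating the verifier/challenge pair — is fully specified by RFC 7636 and can be done with the standard library alone:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).

    The verifier is 43 chars of URL-safe base64 (32 random bytes, padding
    stripped); the challenge is the base64url SHA-256 of the verifier.
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

Keep the verifier client-side until the token exchange; only the challenge goes in the authorize URL.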
Scopes:
- studies:read — query intelligence hub, read study status and results
- studies:write — create new studies and register webhooks
- intelligence:read — full access to the Customer Intelligence Hub
Refresh tokens last 90 days, access tokens last 1 hour. Standard refresh flow applies.
For Claude Desktop specifically, the MCP spec’s OAuth profile handles this automatically — you register your server as a dynamic client and Claude’s UI walks the user through authorization on first use.
Webhook Signing and Verification
When a study completes and the MCP server posts to your webhook_url, the request is signed with HMAC-SHA256 over the raw body. The signature is in the X-User-Intuition-Signature header formatted as sha256=<hex>.
Verification (Node):
const crypto = require('crypto');
function verify(rawBody, signature, secret) {
  const expected = 'sha256=' + crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so check length first
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
Your webhook secret is shown once when you register the webhook endpoint at app.userintuition.ai/settings/webhooks. Rotate it anytime.
Testing the Integration Locally
Three tips that save real time:
Use the sandbox panel. Set sandbox: true in the create_study payload. The server routes the study to a synthetic test panel that completes in 5-10 minutes instead of 2-3 hours. Results use the same schema as production, so your agent code is identical. Sandbox studies don’t consume credits.
Replay webhooks locally. The settings page has a “Replay last webhook” button that re-fires the most recent completion event at your webhook URL. Combine with ngrok to test the full async pattern against a local dev server.
Inspect MCP traffic. Claude Desktop and Claude Code both log MCP tool calls to ~/.claude/logs/mcp/. When a tool call fails, the full request and response are there, which is usually faster than adding client-side logging.
What to Build First
The fastest path to a working integration:
- Configure Claude Desktop with the MCP server config above
- Open a new chat and ask: “Use User Intuition to check whether 15 SMB product managers would pay for an SSO add-on”
- Watch Claude call create_study, poll get_study_status, and return results in a single session
- Extend to your production agent (Cursor, custom LangChain, etc.) using the same config pattern
Most developers ship their first working integration in under an hour. The full technical reference, including OAuth flow, webhook signing, and advanced targeting options, lives at docs.userintuition.ai/integrations/mcp-server.
For the product surface that this MCP server connects to, see our agentic research platform. For a non-technical overview of what this enables, see how to connect AI agents to consumer research via MCP.
Ready to ship? Generate an API key at app.userintuition.ai/settings/api and start building.