MCP Consumer Research Integration Guide for Developers

AI agents in 2026 can write code, review PRs, draft emails, and summarize meetings. What they cannot do on their own is ask 500 real consumers what they think. Training data answers that question with plausible text, not consumer evidence. For product teams shipping real features to real users, that gap matters.

The Model Context Protocol (MCP) closes it. MCP is the open standard, now backed by Anthropic, OpenAI, Google, and Microsoft, that lets AI agents discover and invoke external tools through a single interface. User Intuition’s production MCP server at https://mcp.userintuition.ai/mcp exposes real consumer research as a tool call. Any MCP-capable agent — Claude Desktop, Claude Code, Cursor, ChatGPT, custom LangChain agents — can launch studies, retrieve structured results, and query accumulated intelligence without a custom API wrapper.

This guide is for developers integrating the MCP server into an AI workflow. It covers configuration, authentication, the four core tool calls, payload schemas, error handling, rate limits, and the two integration patterns that cover 95% of use cases.

Why MCP Instead of Direct REST API Calls?

Before MCP, integrating consumer research into an AI workflow meant writing a custom wrapper per client. A Cursor integration, a Claude Desktop integration, a ChatGPT plugin, a LangChain tool — each one a separate codebase with its own auth, its own error handling, its own schema.

MCP collapses those four integrations into one. The server exposes a single tool catalog. Every MCP-capable client discovers those tools through the standard initialization handshake and invokes them through the standard tool-call schema. One integration works everywhere.

There’s a second benefit: schema awareness. When an agent calls create_study, the MCP client validates the arguments against the tool schema before the request leaves the client. That catches errors (missing fields, wrong types) locally instead of round-tripping to the server. For agents making tool calls at inference time, local validation is the difference between a recovered error and a burned turn.

For backend batch jobs that don’t need tool discovery or schema negotiation, direct REST calls against the User Intuition API are still a reasonable choice. For anything agent-driven, MCP is the right abstraction.

Prerequisites

Before you start, you need three things:

  1. A User Intuition account. Sign up at app.userintuition.ai/sign-up. The Starter plan gives you 3 free interviews to test the integration end-to-end.
  2. An API key. Generate one at app.userintuition.ai/settings/api. Store it in an environment variable (USER_INTUITION_API_KEY), never commit it.
  3. An MCP-capable client. Claude Desktop (v1.4+), Claude Code, Cursor (v0.42+), or a custom LangChain/CrewAI agent with the MCP adapter installed.

Server-side jobs can use the API key directly. For end-user-facing integrations in Claude Desktop or Cursor, OAuth 2.1 with PKCE is the recommended auth path — it lets each user connect their own User Intuition account rather than sharing a service-level key.

How Do You Configure the MCP Server?

Claude Desktop

Add this to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the equivalent on your OS:

{
  "mcpServers": {
    "user-intuition": {
      "url": "https://mcp.userintuition.ai/mcp",
      "transport": "http",
      "headers": {
        "X-API-Key": "${USER_INTUITION_API_KEY}"
      }
    }
  }
}

Restart Claude Desktop. The four User Intuition tools appear in the tool tray automatically.

Claude Code

Claude Code reads MCP config from ~/.claude.json or a project-level .mcp.json. The schema is the same as Claude Desktop:

{
  "mcpServers": {
    "user-intuition": {
      "url": "https://mcp.userintuition.ai/mcp",
      "transport": "http",
      "headers": { "X-API-Key": "${USER_INTUITION_API_KEY}" }
    }
  }
}

For the full Claude Code setup, including the OAuth flow, see our agentic research platform, which hosts the live MCP endpoint that Claude Code connects to.

Cursor

Cursor’s MCP config lives at ~/.cursor/mcp.json. Same schema as Claude. After config, the tools appear in the Cursor composer as invokable actions.

Custom Agents (LangChain, CrewAI, AutoGen)

LangChain:

import os

from langchain_mcp_adapters.client import MCPClient

client = MCPClient(
    url="https://mcp.userintuition.ai/mcp",
    headers={"X-API-Key": os.environ["USER_INTUITION_API_KEY"]},
)
tools = await client.get_tools()  # run inside an async function; returns the 4 research tools as LangChain Tools

CrewAI uses a similar adapter. Any framework with an MCP client can connect the same way.

The Four Core Tool Calls

1. create_study

Launches a new consumer research study. Payload:

{
  "name": "SSO add-on pricing test",
  "mode": "preference_check",
  "stimulus": "Would you pay $15/user/month for SSO as an add-on to your current plan?",
  "audience": {
    "role": "IT admin or security lead",
    "company_size": "100-1000 employees",
    "geos": ["US", "UK", "CA"]
  },
  "sample_size": 20,
  "languages": ["en"],
  "webhook_url": "https://your-app.com/mcp-webhooks/studies"
}

Returns a study_id and an estimated completion time.
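A response might look like the following. The study_id and estimated completion time are documented above; the exact field names and values here are illustrative assumptions, not the published schema:

{
  "study_id": "st_7f3a2b",
  "status": "pending",
  "eta_seconds": 9000
}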

2. get_study_status

Poll for completion. Accepts a study_id, returns one of pending, recruiting, in_progress, analyzing, complete, failed. Use the eta_seconds field in the status response to decide whether to keep polling or switch to a webhook.

3. get_study_results

Retrieves the structured findings once the study is complete. Response includes top themes with frequency and example verbatims, segment breakdowns, confidence scores, recruitment metadata, and a direct link to the full dashboard. Agents parse this directly into PRDs, Slack updates, or downstream tool calls.

4. query_intelligence

Searches the Customer Intelligence Hub for existing findings before launching new research. This is the read path — an agent can check “have we asked enterprise admins about SSO pricing in the last 90 days?” and skip redundant studies if the answer is yes.

Every study launched through create_study automatically feeds query_intelligence, so the system compounds over time. This is the same pattern described in agentic research for product teams.

Integration Pattern 1: Synchronous Polling

Best for assumption checks that complete inside a chat session. Pattern:

  1. Agent calls create_study with sample_size: 10-20
  2. Agent calls get_study_status every 30-60 seconds
  3. When status returns complete, agent calls get_study_results
  4. Agent incorporates findings into its next response

This works inside a single turn for the user. The 2-3 hour wall-clock latency is handled by the client — Claude Desktop, Cursor, and Claude Code all support long-running tool calls with periodic status updates.
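The polling loop the client runs can be sketched in a few lines of Python. Everything here is illustrative: call_tool stands in for however your MCP client invokes a tool, and the terminal statuses come from the get_study_status list above.

```python
import time

# Terminal statuses listed for get_study_status in this guide.
TERMINAL_STATUSES = {"complete", "failed"}

def poll_until_done(call_tool, study_id, interval=45, timeout=3 * 3600):
    """Poll get_study_status until the study reaches a terminal state.

    call_tool(name, args) is a stand-in for however your MCP client
    invokes a tool; it should return the parsed result as a dict.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = call_tool("get_study_status", {"study_id": study_id})
        if result["status"] in TERMINAL_STATUSES:
            return result
        time.sleep(interval)
    raise TimeoutError(f"study {study_id} still running after {timeout}s")
```

The 30-60 second interval above matches the guidance in step 2; tighten or relax it based on the eta_seconds the status endpoint reports.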

Integration Pattern 2: Asynchronous Webhooks

Best for studies with 50+ participants, multi-segment comparison, or multilingual recruitment where completion takes 48-72 hours. Pattern:

  1. Agent calls create_study with a webhook_url
  2. Agent returns to the user: “I’ve launched the study. I’ll notify you when it completes.”
  3. Your backend receives a POST at webhook_url when the study finishes
  4. Your backend pings the user (Slack, email, Linear comment) with the results

This pattern is mandatory for deeper studies — no one wants an agent session hanging for 72 hours. For a deeper look at when each pattern is appropriate, see the agentic research vs traditional qual decision matrix.

Error Handling

The MCP server returns standard HTTP status codes. Key cases:

  • 401 Unauthorized: API key missing or invalid. Regenerate at the settings page.
  • 402 Payment Required: Account out of credits. Add payment or upgrade plan.
  • 429 Too Many Requests: Rate limit exceeded. Retry after the Retry-After header.
  • 422 Unprocessable Entity: Schema validation failure. Response body identifies the offending field.
  • 5xx: Server-side issue. MCP clients should retry with exponential backoff.

Agents should treat 402 and 422 as permanent (don’t retry without user intervention) and 429/5xx as transient (retry with backoff).
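That classification translates directly into a retry wrapper. A minimal Python sketch, where make_request is a hypothetical thin wrapper around your HTTP client returning a status code and parsed body:

```python
import random
import time

PERMANENT = {401, 402, 422}            # fix the request or the account first
TRANSIENT = {429, 500, 502, 503, 504}  # safe to retry with backoff

def call_with_retry(make_request, max_attempts=5, base=1.0):
    """Retry transient failures with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        status, body = make_request()
        if status < 400:
            return body
        if status in PERMANENT:
            raise RuntimeError(f"permanent error {status}: {body}")
        # Transient: exponential backoff with jitter. For a real 429,
        # prefer the server's Retry-After header over this computed delay.
        time.sleep(min(base * 2 ** attempt + base * random.random(), 60))
    raise RuntimeError("retries exhausted")
```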

Rate Limits and Quotas

Default limits per API key:

  • 60 tool calls per minute
  • 10 concurrent active studies
  • 10,000 tool calls per day

Enterprise accounts have higher limits. Every response includes X-RateLimit-Remaining and X-RateLimit-Reset headers so agents can self-throttle. For high-volume batch workflows, contact sales@userintuition.ai for a dedicated quota.
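Self-throttling on those headers can be sketched like this. The code assumes X-RateLimit-Remaining is a plain call count and X-RateLimit-Reset is a Unix timestamp; confirm the exact semantics against the API reference before relying on it:

```python
import time

def throttle_from_headers(headers, floor=5):
    """Sleep until the rate-limit window resets when few calls remain.

    Assumes X-RateLimit-Remaining is a call count and X-RateLimit-Reset
    is a Unix timestamp (both assumptions about this API's semantics).
    """
    remaining = int(headers.get("X-RateLimit-Remaining", floor + 1))
    if remaining <= floor:
        wait = float(headers.get("X-RateLimit-Reset", 0)) - time.time()
        if wait > 0:
            time.sleep(wait)
```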

OAuth 2.1 Flow for End-User Integrations

API keys are fine for server-side agents, but integrations shipped inside a user-facing client (Claude Desktop, Cursor, a Slack bot your customers install) need per-user auth. The MCP server supports OAuth 2.1 with PKCE for this case.

The flow mirrors standard OAuth 2.1 authorization code + PKCE:

  1. Client generates a PKCE code verifier and code challenge
  2. Client redirects user to https://app.userintuition.ai/oauth/authorize with client_id, redirect_uri, code_challenge, code_challenge_method=S256, and requested scope
  3. User authenticates with User Intuition and approves the requested scopes
  4. User Intuition redirects back to redirect_uri with an authorization code
  5. Client exchanges the code + PKCE verifier at https://app.userintuition.ai/oauth/token for an access token and refresh token
  6. Client attaches the access token as Authorization: Bearer on every MCP request
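Step 1 is the only part of the flow that involves cryptography on your side. Generating the code verifier and its S256 challenge per RFC 7636 takes a few lines of Python stdlib:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

Send the challenge in step 2 and hold the verifier until the token exchange in step 5; the padding-stripped base64url encoding is what the S256 method requires.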

Scopes:

  • studies:read — query intelligence hub, read study status and results
  • studies:write — create new studies and register webhooks
  • intelligence:read — full access to the Customer Intelligence Hub

Refresh tokens last 90 days, access tokens last 1 hour. Standard refresh flow applies.

For Claude Desktop specifically, the MCP spec’s OAuth profile handles this automatically — you register your server as a dynamic client and Claude’s UI walks the user through authorization on first use.

Webhook Signing and Verification

When a study completes and the MCP server posts to your webhook_url, the request is signed with HMAC-SHA256 over the raw body. The signature is in the X-User-Intuition-Signature header formatted as sha256=<hex>.

Verification (Node):

const crypto = require('crypto');

function verify(rawBody, signature, secret) {
  const expected = 'sha256=' + crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so check lengths first
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
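
If your backend is Python rather than Node, the equivalent check uses the stdlib hmac module (hmac.compare_digest handles the constant-time comparison):

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature: str, secret: str) -> bool:
    """Check the X-User-Intuition-Signature header ('sha256=<hex>')
    against the raw request body."""
    expected = "sha256=" + hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Always verify against the raw bytes of the body, not a re-serialized copy; re-encoding the JSON can change key order or whitespace and break the signature.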

Your webhook secret is shown once when you register the webhook endpoint at app.userintuition.ai/settings/webhooks. Rotate it anytime.

Testing the Integration Locally

Three tips that save real time:

Use the sandbox panel. Set sandbox: true in the create_study payload. The server routes the study to a synthetic test panel that completes in 5-10 minutes instead of 2-3 hours. Results use the same schema as production, so your agent code is identical. Sandbox studies don’t consume credits.

Replay webhooks locally. The settings page has a “Replay last webhook” button that re-fires the most recent completion event at your webhook URL. Combine with ngrok to test the full async pattern against a local dev server.

Inspect MCP traffic. Claude Desktop and Claude Code both log MCP tool calls to ~/.claude/logs/mcp/. When a tool call fails, the full request and response are there, which is usually faster than adding client-side logging.

What to Build First

The fastest path to a working integration:

  1. Configure Claude Desktop with the MCP server config above
  2. Open a new chat and ask: “Use User Intuition to check whether 15 SMB product managers would pay for an SSO add-on”
  3. Watch Claude call create_study, poll get_study_status, and return results in a single session
  4. Extend to your production agent (Cursor, custom LangChain, etc.) using the same config pattern

Most developers ship their first working integration in under an hour. The full technical reference, including OAuth flow, webhook signing, and advanced targeting options, lives at docs.userintuition.ai/integrations/mcp-server.

For the product surface that this MCP server connects to, see our agentic research platform. For a non-technical overview of what this enables, see how to connect AI agents to consumer research via MCP.

Ready to ship? Generate an API key at app.userintuition.ai/settings/api and start building.

Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

What is MCP and why does it matter for consumer research?

MCP (Model Context Protocol) is Anthropic's open standard for connecting AI agents to external tools. It matters for consumer research because it lets any MCP-capable agent, including Claude, Cursor, and ChatGPT, launch real studies with real people through a single standardized interface, without custom API wrappers per client.

How do I configure Claude Desktop to use the MCP server?

Add a server entry to your claude_desktop_config.json pointing to https://mcp.userintuition.ai/mcp with your API key as a header. Restart Claude Desktop, and the four research tools (create_study, get_study_status, get_study_results, query_intelligence) appear automatically. Full config examples are at docs.userintuition.ai/integrations/mcp-server.

How does authentication work?

Two options: API-key authentication for server-to-server integrations (header: X-API-Key) and OAuth 2.1 with PKCE for client applications where users authenticate with their User Intuition accounts. OAuth is the recommended path for Claude Desktop and Cursor; API keys are better for backend agents and automation.

What tools does the MCP server expose?

Four core tools: create_study (launch a new consumer research study), get_study_status (poll for completion), get_study_results (retrieve structured findings), and query_intelligence (search existing studies in the Customer Intelligence Hub). Each tool has a JSON schema that agents discover automatically via the MCP initialization handshake.

Can I use the MCP server from LangChain or CrewAI?

Yes. LangChain supports MCP via the langchain-mcp adapter, and CrewAI can invoke MCP servers through its tools interface. Any framework that speaks MCP can connect to the User Intuition server. The integration is identical to Claude or Cursor, only the client differs.

How long do studies take to complete?

Assumption checks with 10-20 participants typically complete in 2-3 hours. Deeper studies with 50+ participants, open-ended probing, or multi-segment comparison take 48-72 hours. Agents can choose polling for short studies or webhook callbacks for longer ones. Both patterns are documented in this guide.

How do agents handle long-running studies?

Agents have two options: synchronous polling (call get_study_status every 30-60 seconds, useful inside a chat session for assumption checks), or asynchronous webhooks (register a callback URL at study creation, receive a POST when the study completes). For long-running studies, webhooks are strongly recommended to avoid blocking the agent session.

What do study results look like?

A structured JSON payload containing: top themes with frequency and example verbatims, segment-level breakdowns, confidence scores, recruitment metadata (panel size, completion rate, languages), and direct links to the full study dashboard. Agents can parse this directly into PRDs, Slack messages, or downstream tool calls.

What are the rate limits?

Default limits are 60 tool calls per minute per API key and 10 concurrent studies per account. Enterprise accounts have higher limits. Burst headers (X-RateLimit-Remaining) are returned on every response so agents can self-throttle. Exceeded limits return a 429 with a Retry-After header.

How does the MCP server differ from the direct REST API?

The MCP server wraps the same underlying REST API but adds tool discovery (agents learn available operations without docs), schema validation at the client, and native session handling for long-running studies. For agent workflows, MCP is substantially less code than custom REST wrappers. For server-side batch jobs, direct REST is still a reasonable choice.

What do I need to get started?

Three things: (1) an API key from https://app.userintuition.ai/settings/api, (2) the server URL https://mcp.userintuition.ai/mcp added to your MCP-capable client's config, (3) a sample tool call to create_study with a small audience. Most developers ship their first working integration in under an hour using the code samples at docs.userintuition.ai/integrations/mcp-server.