
How to Run User Research with User Intuition (Step-by-Step)


User research on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. A product manager, UX designer, or research ops lead can launch a study in an afternoon, then watch transcripts, satisfaction-rated calls, and synthesized reports land inside the Intelligence Hub within 24-48 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which makes a continuous monthly program viable for any team running 2-week sprints. This tutorial walks through each step with the verbatim labels you will see in the product, the choices that matter most for user research, and the links to the underlying documentation.

For the broader methodology, see the methodology guide. For the headline pricing and the comparison against unmoderated tools, see the User Intuition user research platform page. This piece stays in the workflow lane: what to click, where to click it, and why each setting matters when the cohort you care about is your product’s users.

What is user research, briefly?


User research is the systematic practice of interviewing users, both current users of your product and people in your target market, to understand the gap between what they expect and what they experience. It sits between two failed substitutes: product analytics, which tells you what users did but never why, and agency engagements, which take 4 to 8 weeks and cost $15K to $30K per study. Neither produces continuous intelligence. Both produce reports that arrive after the sprint that needed them has already shipped.

Done well, user research answers four questions every product team needs every quarter. First, what are users trying to accomplish? Second, where does the current experience break down between expectation and reality? Third, what do target users expect that we have not built yet? Fourth, what changed since last release?

The mechanism is depth. A 20-minute conversation with 5 to 7 levels of structured probing surfaces the why behind behavior that an analytics dashboard will never reach. Trust anxiety, cognitive load, mental-model mismatch, persona-specific pain: these are the patterns underneath the abandonment metric. They show up only in conversation.

Modern AI-moderated platforms compress the timeline from weeks to 24-48 hours and the cost from $15K+ per study to $200 floor pricing. Instead of one big quarterly study, product teams can run 10 to 15 user interviews every month, feed every transcript into a searchable Intelligence Hub, and watch product intelligence compound release by release. The workflow below is built around that cadence.

Why doing this on User Intuition is different


Most user research tooling makes you choose between depth and speed. Unmoderated tools (Maze, Lyssna) are fast and shallow. Recruiting platforms (UserTesting, dscout, UserInterviews) are slow and require external moderators. User Intuition delivers both, and the product flow is shaped by that goal.

The AI moderator runs structured laddering conversations and probes 5 to 7 levels deep on what users expected, what they encountered, and where the gap shows up. Every conversation runs the same methodology, so a study with 10 interviews and a study with 300 interviews are directly comparable. There is no moderator drift. Users report higher candor with AI than with vendor-affiliated researchers, and the 98% participant satisfaction rate reflects how the experience lands.

Recruitment is built in. The 4M+ research panel covers 50+ languages and lets you target by demographics, geography, behavior, and segment without building a list. Existing customers flow in from Salesforce or HubSpot via the Contacts integration, or from a unique study link you share inside your product. Always-on collection is also a button: drop the embed widget into a settings page, an onboarding screen, or a feature flag rollout, and feedback flows continuously.

Synthesis runs in the same product. The Reports tab generates an Executive Summary, Top Insights, Detailed Analysis, and Recommendations, all grounded in verbatim user quotes. The Intelligence Hub layers across studies, so onboarding research, retention research, and feature research surface compounding patterns that any single-study report would miss. Pricing at $20/interview keeps the program sustainable at product-team scale, where agency programs typically die after one or two cycles. See the user research platform page for the full positioning.

The 8-step walkthrough


The product flow is the same for every study type. The choices below are tuned for user research specifically.

Step 1: Create Study

Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. For user research, pick the template that maps to your research mode. Customer Onboarding for “What do people expect early on?” NPS and CSAT for “What’s driving satisfaction with the current experience?” Churn Analysis for “Why don’t people come back?” Custom Design for a feature or interaction question that does not fit a pre-built template. Click Save & Continue.

The chosen template pre-loads the conversation flow, screener defaults, and learning goals tuned for that research mode. Starting from the right shape saves 10 minutes versus Custom Design from cold.

User-research-specific tip: if your design decision spans multiple modes (a new onboarding flow that also affects retention), pick the template closest to the primary decision and add secondary questions in Step 3. One study with a tight scope outperforms a sprawling one.

Step 2: Customize Plan

The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.

For user research, give Charles three things. First, the product or feature in scope: “We are studying the new dashboard we shipped six weeks ago.” Second, the user segment: “Active Pro-plan users, daily-active for at least 30 days.” Third, the design decision the research must inform: “We are deciding whether to keep the new IA or roll back.” Charles will return a plan that asks about user expectations, current usage patterns, friction points, and the gap between what users hoped for and what they got.

User-research-specific tip: tell Charles which user actions you already measure in product analytics so he can probe the why behind them, rather than re-collecting what analytics tells you. Read the Customize Plan docs for the full set of context questions Charles will ask.

Step 3: Review Goals

The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 Dashboard UX Research”; rename it to fit your internal naming convention). Below that, every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.

This is the step where most teams rebalance the research plan to capture both behavior and expectations. The default plan tends to weight toward behavior questions (“Walk me through how you use the dashboard”), which is what generic UX research collects. Add explicit expectation questions under Key Sub-Questions: “Before you opened the dashboard for the first time, what did you expect to find?” and “What were you hoping to be able to do with this product that you can’t do today?” The gap between expectation and reality is where the highest-leverage user research insight lives, and a generic feature-feedback study will never surface it.

User-research-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Longer plans mean longer interviews mean lower completion rates. Click Save & Continue when the plan reads the way you want.
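The sizing rule is easy to enforce with a quick self-check before you click Save & Continue. A minimal Python sketch using a hypothetical plan structure (this is not a User Intuition API; it just encodes the rule of thumb from this step):

```python
# Sanity-check a research plan's size before launch.
# Hypothetical plan data, not a User Intuition API.
plan = {
    "Expectations": [
        "Before you opened the dashboard, what did you expect to find?",
        "What were you hoping to do that you can't do today?",
    ],
    "Current usage": [
        "Walk me through the last time you opened the dashboard.",
        "Which view do you land on first, and why?",
    ],
    "Friction": [
        "Where does the experience slow you down?",
        "What workaround have you built, if any?",
    ],
    "Gaps": [
        "What did you expect this product to do that it doesn't?",
        "If you could change one thing tomorrow, what would it be?",
    ],
}

def check_plan(plan: dict) -> list:
    """Return warnings when the plan drifts past the recommended size."""
    warnings = []
    if not 4 <= len(plan) <= 6:
        warnings.append(f"{len(plan)} themes (aim for 4-6)")
    for theme, questions in plan.items():
        if not 2 <= len(questions) <= 4:
            warnings.append(f"'{theme}' has {len(questions)} questions (aim for 2-4)")
    total = sum(len(qs) for qs in plan.values())
    if total > 12:
        warnings.append(f"{total} total questions (aim for 12 or fewer)")
    return warnings

print(check_plan(plan))  # a compliant plan prints []
```

If the check flags anything, trim themes in the Review Goals editor before moving on; shorter plans finish with higher completion rates.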

Step 4: Setup Interview

The Setup Interview screen handles the participant-facing experience. Two voice cards: Elliot (Male, American) and Clara (Female, American). Click the play button on each card to preview the voice. Pick the one whose tone fits your user audience; click the card to select it. Below the voice cards, the Default Mode toggle sets the format: Chat, Audio, or Video. Audio is the recommended default for user research because voice transcripts capture think-aloud commentary, hesitation at confusing UI moments, and emotional reaction to friction that chat will not.

Below the format toggles is the screener questions section. Add the screeners that match your study mode. For user research, the highest-leverage screeners are product-usage frequency (“How often have you used {product} in the last 30 days?”), plan tier or feature access (“Are you on the Free, Pro, or Enterprise plan?”), and persona attribute (“Which of these best describes your role?”). Anyone outside the cohort drops out before consuming a credit. For multilingual studies, language and country targeting happens later in the Invite Participants step, not here.

User-research-specific tip: voice format works best for user research because think-aloud is the data, not the words. A user pausing, hesitating, or sighing at a confusing screen tells you more than the eventual sentence they construct.
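Conceptually, the screeners act as a pre-interview filter so only the target cohort consumes credits. A minimal Python sketch under assumed answer fields and thresholds (the platform applies this logic internally; the field names and the weekly-usage cutoff here are invented for illustration):

```python
# Screen out participants who fall outside the study cohort
# before they consume an interview credit.
def passes_screeners(answers: dict) -> bool:
    """Hypothetical screener logic mirroring the three suggested questions."""
    used_recently = answers.get("usage_last_30_days", 0) >= 4  # assumed cutoff: weekly+
    on_target_plan = answers.get("plan") in {"Pro", "Enterprise"}
    target_persona = answers.get("role") in {"Product Manager", "UX Designer"}
    return used_recently and on_target_plan and target_persona

candidates = [
    {"usage_last_30_days": 12, "plan": "Pro", "role": "Product Manager"},
    {"usage_last_30_days": 1, "plan": "Pro", "role": "Product Manager"},
    {"usage_last_30_days": 20, "plan": "Free", "role": "UX Designer"},
]
qualified = [c for c in candidates if passes_screeners(c)]
print(len(qualified))  # only the first candidate passes all three screeners
```

The design point: each screener maps to one cohort attribute (usage, plan, persona), and a participant must clear all of them, which is why two or three sharp screeners beat a long questionnaire.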

Step 5: Test Conversation

The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.

For user research, run the test as a user who would give a surface answer (“the dashboard is fine, I use it every day”) and listen for whether the AI probes past that surface. UX research fails when researchers stop at the first answer. The AI should ask “What does fine mean specifically?” or “Walk me through the most recent time you opened it; what were you trying to do?” If the AI accepts the surface answer, the rest of your interviews will too.

User-research-specific tip: if probing is not deep enough, go back to Step 3 and tighten the Interviewer Guidelines with instructions like “Probe at least three levels deep when participants give a one-word answer.” Then re-test.

Step 6: Launch

The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (the one with the rocket icon). Your study goes live immediately. You are redirected to the Study Dashboard.

The detail teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional, because most teams want to set up the study config first, review it with design and product partners, and then trigger recruitment on a specific day.

User-research-specific tip: launch on a Wednesday morning, then push recruitment the same day. Mid-week launches see the highest completion rates among professional users. For continuous programs, set a monthly cadence on the same day each month so the Intelligence Hub starts compounding across feature areas.

Step 7: Invite Participants

This step opens after launch. From the Study Dashboard, the Invites tab plus the Actions dropdown gives you four invitation methods that map directly to user research recruitment patterns.

Invite Your Customers lets you import contacts manually, via CSV, or directly from a synced CRM (Salesforce, HubSpot) using the Contacts tab and Segments tab. This is the right path for product-specific research where you want existing users only: pull a segment of users active in the last 30 days, then send.

Research Panel opens recruitment from the 4M+ panel across 50+ languages; country and language selection for multilingual studies happens here. Use this for discovery research with target users you do not have yet (a new persona, a new market, or a feature aimed at a non-customer audience).

Share Link generates a unique URL that you can paste into a personalized email, an in-product banner, or an automated workflow.

Embed Widget drops always-on user feedback collection into your app or settings page for in-the-flow UX feedback that surfaces friction at the moment it happens.

User-research-specific tip: combine methods. CRM segments for current-product research, panel for discovery research with target users you do not have yet, and the embed widget for ambient continuous feedback inside the app. The Recruiting overview covers all four methods side by side.
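If you are importing by CSV rather than the CRM sync, the segment pull is a few lines of scripting. A sketch that assumes your export has `email`, `plan`, and `last_active` columns (your CRM's actual column names will differ):

```python
from datetime import date, timedelta

# Filter a CRM export down to the study cohort:
# Pro-plan users active in the last 30 days.
# Column names (email, plan, last_active) are assumptions about your export.
def build_segment(rows, today):
    cutoff = today - timedelta(days=30)
    return [
        {"email": row["email"]}
        for row in rows
        if row["plan"] == "Pro" and date.fromisoformat(row["last_active"]) >= cutoff
    ]

sample = [
    {"email": "a@example.com", "plan": "Pro", "last_active": "2025-06-20"},
    {"email": "b@example.com", "plan": "Free", "last_active": "2025-06-21"},
    {"email": "c@example.com", "plan": "Pro", "last_active": "2025-01-05"},
]
segment = build_segment(sample, today=date(2025, 7, 1))
print([r["email"] for r in segment])  # only a@example.com matches both filters
```

Write the result out with `csv.DictWriter` and upload it via the CSV import path; with the Salesforce or HubSpot sync, the Segments tab does the same filtering for you.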

Step 8: Review Insights

Once interviews complete (the first ones inside hours, full cohorts within 24-48 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.

The third surface is the Intelligence Hub, accessed from the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files (PDFs, CSVs), and ask natural-language questions across everything in the room. For a continuous user research program, this is the compounding layer.

User-research-specific tip: create a single Intelligence Hub session called “User Research” at the start of the year, sync every monthly study into it, and keep adding sources. Then ask questions like “How has user language about our onboarding friction evolved across the last three releases?” or “Which segments mention the same usability gap?” The cross-study pattern recognition (month 1 onboarding research plus month 3 retention research plus month 6 expansion research) is the part that single-study reports cannot replicate. The Intelligence Hub docs cover the full output surface.

Mini case study


A mid-market SaaS product team ran the workflow above on 18 active users 30 days after shipping a new analytics dashboard. The team set up the study in a single afternoon: NPS and CSAT template at Step 1, product area and user segment briefed to Charles at Step 2, learning goals tightened at Step 3 to include explicit expectation questions, Audio mode and product-usage screener at Step 4, one test conversation at Step 5, mid-week launch at Step 6.

Recruitment used the CRM Contacts integration to import the active-user segment (signed in within the last 30 days, on the Pro plan). All 18 interviews completed within 56 hours.

The Reports tab and the Intelligence Hub surfaced a finding that contradicted the team’s assumption. Analytics showed 78% of users opened the dashboard in week one, which the team read as adoption success. The user research told a different story: 13 of 18 users opened it once, did not find the chart they expected (a daily-active-user trendline they had asked for in a previous study), and did not return. The team rebuilt the default view around the most-requested chart. Within two releases, weekly return usage improved by 41%.

“We used to wait 6 weeks for research. Now we run studies inside our sprint cycle. The depth of the AI’s laddering surprised me, we uncovered emotional trust barriers that changed our entire onboarding approach.”

Eric O., COO, RudderStack

What gets the best results from user research on User Intuition?


The tips below come from the patterns that consistently move ship decisions.

Keep the research plan tight. Aim for 4 to 6 main themes with 2 to 4 sub-questions each at Step 3. Plans with 8+ themes produce interviews longer than 25 minutes, and completion rates drop sharply. Twelve total questions or fewer keeps completion above 75% on professional-user cohorts.

Always ask the expectation question. Generic UX research collects feature feedback. Great user research surfaces the gap between what users expected and what they encountered. Add an explicit expectation question at Step 3: “Before you used {feature}, what did you expect it to do?”

Pick voice over chat. Voice transcripts capture think-aloud commentary, hesitation at confusing UI moments, and emotional reaction to friction that chat compresses out. Reserve chat for international users where typing is a better fit.

Mix current users and target users when discovery is in scope. Current users tell you how to fix what shipped. Panel users tell you what to ship next. A study with 10 current users and 8 panel users outperforms 18 current users for most product roadmap decisions.

Run continuous, not quarterly. 10 to 15 user interviews per month at $20/interview on the Pro plan compounds into a much sharper picture than one 40-interview quarterly batch. The Intelligence Hub gets meaningfully more useful every month you feed it. See the cost breakdown for SaaS user research programs and the comparison of user research platforms for SaaS in 2026 for the budgeting case behind the monthly cadence.
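The budgeting arithmetic behind the monthly cadence is simple, using only figures quoted in this article ($20/interview on the Pro plan, $500 at the agency low end):

```python
# Annual budget for a continuous program vs agency per-interview rates,
# using the figures quoted in the article.
PLATFORM_RATE = 20       # $/interview on the Pro plan
AGENCY_RATE_LOW = 500    # $/interview, agency low end
monthly_interviews = 12

annual_interviews = monthly_interviews * 12
platform_annual = annual_interviews * PLATFORM_RATE
agency_annual = annual_interviews * AGENCY_RATE_LOW

print(f"{annual_interviews} interviews/year")
print(f"platform: ${platform_annual:,}")   # $2,880
print(f"agency:   ${agency_annual:,}")     # $72,000 at the low end
```

In other words, a full year of monthly interviewing at platform rates costs less than a fifth of a single $15K agency study, which is what makes the continuous cadence sustainable.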

Sync the Intelligence Hub monthly. Every completed study should land in the same User Research session on a regular cadence. Cross-release pattern recognition is the highest-leverage output User Intuition produces, and it only works if every study lands in the same session.

Route insights inside 7 days. Findings degrade fast. As soon as the report and the Intelligence Hub are populated, route specific findings to specific owners (PM, design, engineering) with a defined SLA. The same routing pattern underpins agentic research programs for product teams, where the compounding-intelligence logic only pays off if findings reach owners quickly. The studies that move ship decisions are the ones whose findings reach a decision-maker the same week.


Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

How long does it take to set up a study?
Setup takes 10 to 20 minutes from Create Study to Launch if you already know the product area, the user segment, and the design decision you want the research to inform. Charles, the AI researcher inside User Intuition, generates the objective, background, and learning goals from a short brief. Most product teams spend more time recruiting users than configuring the study.

Do I need a dedicated researcher?
No. Product managers, UX designers, and research-curious founders can run a full study without a dedicated researcher. The AI does the moderation, the synthesis, and the report. Your job is to brief Charles on the product area and user segment, edit the learning goals, and review the Intelligence Hub once interviews complete.

Can I interview my own customers?
Yes. The Invite Participants step has a CRM contacts integration and a Segments tab that pulls participant lists directly from Salesforce, HubSpot, or your customer data platform. You can also paste emails manually, upload a CSV, or share a unique study link with specific user cohorts inside your product.

Should I run voice or chat interviews?
Voice for almost every user research study. Voice transcripts capture think-aloud commentary, hesitation around confusing flows, and emotional reaction to friction that chat compresses out. Chat is the right call for international users in non-primary languages or for accessibility-sensitive participants. Audio is the recommended default in User Intuition's setup screen.

How many interviews do I need?
Eight to twelve interviews surface directional patterns for usability and discovery research. Twenty to thirty interviews is where segment-by-segment comparisons become defensible (power users vs. new users, plan tier A vs. plan tier B). The 4M+ research panel makes it practical to run continuous monthly studies of 10 to 15 interviews instead of one big quarterly batch.

How do screener questions work?
Screener questions sit inside the Setup Interview step and run before the main interview begins. For user research, the highest-leverage screeners are product-usage frequency (active in the last 30 days), plan tier or feature access, and persona attribute. Add a question like 'How often do you use {product feature}?' so participants without context drop out before consuming a credit.

Can I run multilingual studies?
Yes. User Intuition supports 50+ languages, and the country and language selection happens at the Invite Participants step when you build a research panel. For a global product, you can run parallel English, Spanish, French, German, and Japanese cohorts inside a single study and compare verbatim themes side by side in the Intelligence Hub.

What does it cost?
On the Professional plan, audio interviews run at $20/interview, the headline rate. Studies start at $200, with no monthly minimums and no contract. Twelve interviews per month for a continuous user research program runs roughly $240, well below a single agency interview at $500 to $2,000.

How does this compare to an agency?
Agencies take 4 to 8 weeks per study at $15K to $30K. The User Intuition workflow lands user interviews and a synthesized report in 24-48 hours. Charles handles methodology, the panel handles recruitment, and the Intelligence Hub handles compounding pattern recognition across studies, so you stop re-learning the same usability lessons every quarter.

Where do the results live?
Three places inside User Intuition. The Calls tab holds every transcript and audio recording with quality ratings. The Reports tab holds the AI-synthesized Executive Summary, Top Insights, Detailed Analysis, and Recommendations. The Intelligence Hub aggregates studies over time so you can ask cross-study questions like 'How have users described our onboarding friction across the last three releases?'
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.


No contract · No retainers · Results in 72 hours