Market research on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. The product is built so a brand manager, product marketer, founder doing customer discovery, or consultant running a buyer-insight engagement can launch the qualitative depth-interview portion of their program in an afternoon, then watch transcripts and synthesized reports show up inside the Intelligence Hub within 48-72 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which makes a continuous quarterly program financially viable even without an agency budget. This tutorial walks through each step with the verbatim labels you will see in the product and the choices that matter most for market research specifically.
For the broader methodology (program design, sample sizing, segment-level analysis), see the methodology guide. For headline pricing and the comparison against GfK, NielsenIQ, Mintel, and YouGov, see User Intuition’s market intelligence software. This piece is the workflow lane.
What is market research, briefly?
Market research is the systematic practice of studying the people who shape a market, including buyers, non-buyers, prospects, and category influencers, to inform a category, brand, product, pricing, or go-to-market decision. The practice spans two pillars. The quantitative pillar measures size, share, segmentation, and tracking. The qualitative pillar surfaces motivation, language, perception, and the underlying decision logic that quantitative measurement cannot reach. Most programs use both.
Done well, qualitative market research answers four questions every commercial team needs answered every quarter. First, what does this audience want, in their own words? Second, how does the audience rank the alternatives we compete against? Third, where are perception and reality misaligned in ways we can fix? Fourth, what changed since last quarter? Research that is not anchored to a decision tends to become shelfware no matter how methodologically elegant it is.
The mechanism that makes qualitative market research work is depth. A 30-minute conversation with a target buyer, with 5 to 7 levels of structured probing, surfaces motivation drivers that a 5-minute survey will never reach. Category framing, language preference, alternative consideration logic, perceived risk, the story buyers already tell themselves about your category: these patterns show up only in conversation. AI moderation has compressed the cost and timeline so the depth pillar can run continuously instead of annually.
Why doing this on User Intuition is different
User Intuition handles the qualitative depth-interview portion of a market research program: buyer interviews, motivation laddering, concept reactions, brand perception probes. We do not run quantitative trackers, MMM, panel-only services, or dashboard-style market sizing. If your program needs the quant pillar, you will pair User Intuition with a different tool. The walkthrough below is for the qualitative depth-interview workflow.
Most qualitative tooling makes you choose between depth and speed. DIY survey tools are fast and shallow. Traditional agencies are deep and slow. User Intuition is built to deliver both inside the qualitative depth lane.
The AI moderator runs 30-minute structured laddering conversations and probes 5 to 7 levels deep on motivation, category framing, and the moments that shape category preference. Every conversation runs the same methodology, so a study with 15 interviews and a study with 300 interviews are directly comparable. There is no moderator bias, because the moderator is a consistent system rather than a rotating contractor bench. Buyers report higher candor with AI than with brand-affiliated researchers, and the 98% participant satisfaction rate reflects how the experience lands.
Recruitment is built in. The 4M+ research panel covers 50+ languages and lets you target by demographics, geography, and category behavior without building a list. Existing customer cohorts flow in from Salesforce or HubSpot via the Contacts integration, or from a unique study link you share. Always-on collection is also a button: drop the embed widget into your post-purchase email or category landing page.
Synthesis runs in the same product. Once interviews complete, the Reports tab generates an Executive Summary, Top Insights, Detailed Analysis, and Recommendations, grounded in verbatim quotes. The Intelligence Hub layers across studies, so quarter one and quarter two patterns surface compounding insights that any single-study report would miss. Pricing at $20/interview makes the qualitative pillar sustainable as a continuous program. See User Intuition’s market intelligence software for the full positioning, and the agentic market research guide for the underlying design choices.
The 8-step walkthrough
The product flow is the same for every study type. The choices below are tuned for qualitative market research specifically.
Step 1: Create Study
Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. For most market research questions, the right card is Brand Health (when the question is about category dynamics, perception, or competitive positioning) or Custom Design (when you want to start from a custom research question rather than a template). The selected card highlights with a dark background. Click Save & Continue.
Pick by question shape. “What do people expect from this category early on?” suits Custom Design. “What drives satisfaction with category alternatives?” suits Brand Health. “Why do prospects pick competitors over us in this segment?” can run on Win/Loss Analysis if the cohort is buyers who recently evaluated. Concept tests, motivation studies, and message validation are Custom Design.
Market-research-specific tip: do not stretch a template that does not fit. The 10 minutes you save by picking a template that almost fits costs you 30 minutes later rewriting the plan Charles drafts in Step 2. See the Choose Your Study Type docs.
Step 2: Customize Plan
The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.
For market research, give Charles three things in your opening message. First, the business question: “Should we enter the protein-bar category?” or “Why are SMB prospects choosing competitor X over us?” or “How do millennial parents talk about kids’ nutrition right now?” Second, the audience: “Mid-market B2B SaaS buyers, $50M to $250M revenue, North America” or “US adults 25 to 44, primary grocery shoppers, household income $75K+.” Third, the decision the research must inform: “Yes/no on a Q4 category launch” or “Whether to reposition the SMB ICP for next year.” Charles will return a research plan that maps the Key Sub-Questions to each branch of the decision.
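If it helps to see those three elements together, here is a minimal sketch of an opening message assembled from them. The helper function and the exact wording are illustrative only; Charles accepts free-form chat text, so any phrasing that covers the business question, the audience, and the decision works.

```python
# Illustrative only: Charles takes free-form chat text, so this helper just
# checks that a brief covers all three elements before you paste it in.
def draft_charles_brief(business_question: str, audience: str, decision: str) -> str:
    return (
        f"Business question: {business_question}\n"
        f"Audience: {audience}\n"
        f"Decision this research must inform: {decision}"
    )

brief = draft_charles_brief(
    business_question="Should we enter the protein-bar category in Q4?",
    audience="US adults 25 to 44, primary grocery shoppers, household income $75K+",
    decision="Yes/no on a Q4 category launch",
)
print(brief)  # Paste the output into the Customize Plan chat box.
```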
Market-research-specific tip: anchor every brief to a decision. Market research that is not tied to a decision becomes shelfware no matter how sharp the methodology. If you cannot name the decision, that is the signal to slow down before launching the study. Read the Customize Plan docs for the full set of context, scope, and objective questions Charles will ask.
Step 3: Review Goals
The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q3 Category Entry, Protein Bars”; rename it to fit your internal naming convention). Below that, every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.
This is the step where most teams tighten the learning goals to the decision. The default plan tends to ask broad category questions; market research that moves a decision needs explicit probes on the specific dimensions of the decision. If you are deciding on a category entry, add a Key Sub-Question like “What does this audience already do to solve the underlying need, and how satisfied are they with that solution?” If you are deciding on a repositioning, add “How does this audience describe the alternatives in the category, and which alternative do they default to when they cannot think of a brand name?”
Market-research-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Longer plans mean longer interviews, and longer interviews mean lower completion rates. The Review Study docs have the full editing toolbar reference. Click Save & Continue when the plan reads the way you want.
Step 4: Setup Interview
The Setup Interview screen handles the participant-facing experience. There are two voice cards: Elliot (Male, American) and Clara (Female, American). Click the play button on each card to preview the voice, then click the card whose tone fits your audience to select it. Below the voice cards, the Default Mode toggles set the interview format: Chat, Audio, or Video. Audio is the recommended default for market research because voice transcripts capture motivation language, hesitation, and the emotional emphasis that survey instruments compress out.
Below the format toggles is the screener questions section. For market research, the highest-leverage screeners are the audience-qualifying ones: industry, role, behavior, and demographics. For a category-entry study, screen for category usage frequency. For a B2B buying-journey study, screen for involvement in the relevant evaluation. For a brand perception study, screen for awareness of the category, not your brand specifically. Anyone who fails a screener drops out before consuming a credit.
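Screeners are configured in the product UI rather than in code, but drafting them as structured data first makes the qualifying logic easy for stakeholders to review. The sketch below is a hypothetical shape with made-up field names, not the product's schema.

```python
# Hypothetical structure for drafting screeners before entering them in the UI.
# Field names are illustrative, not User Intuition's schema.
category_entry_screeners = [
    {
        "question": "How often do you buy protein bars?",
        "type": "single_choice",
        "options": ["Weekly or more", "Monthly", "A few times a year", "Never"],
        "qualify_if": ["Weekly or more", "Monthly"],
    },
    {
        "question": "Are you the primary grocery shopper in your household?",
        "type": "single_choice",
        "options": ["Yes", "No"],
        "qualify_if": ["Yes"],
    },
]

def qualifies(answers: dict, screeners: list) -> bool:
    """True only if every screener answer falls in its qualifying set."""
    return all(answers.get(s["question"]) in s["qualify_if"] for s in screeners)
```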
Market-research-specific tip: pick Elliot for audiences that skew male and Clara for audiences that skew female. The match between moderator voice and participant demographic measurably improves rapport and depth. Read the Setup Interviewer docs for the full mode-selection guide.
Step 5: Test Conversation
The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.
For market research specifically, the test you are running is whether the AI probes the business question, not just the surface answer. Most market research fails because the moderator does not ladder deep enough. If the test conversation lets you say “I just like that brand” without probing why, the live study will have the same problem. Run a test as a participant who gives a shallow surface answer, and verify the AI follows up with “What is it about that brand specifically?” and then “When did you first notice that?” and then “Have there been moments where another brand felt closer to that?”
Market-research-specific tip: if the AI accepts a surface answer, go back to Step 3 and tighten the Interviewer Guidelines with explicit instructions like “Probe at least three levels deep on category preference. Do not accept brand affinity as a final answer.” Then re-test. The Test Conversation docs cover what to look for during testing.
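If you want a starting point for that guideline edit, the snippet below holds one illustrative block of probing instructions you could paste into the Interviewer Guidelines at Step 3. The wording is an assumption about what works for category-preference laddering, not product-mandated copy.

```python
# Illustrative Interviewer Guidelines text for depth probing; adapt to your category.
probing_guidelines = """
Probe at least three levels deep on category preference.
Do not accept brand affinity ("I just like that brand") as a final answer.
Ladder with follow-ups such as:
  1. What is it about that brand specifically?
  2. When did you first notice that?
  3. Have there been moments where another brand felt closer to that?
"""
print(probing_guidelines)  # Paste into the Interviewer Guidelines editor at Step 3.
```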
Step 6: Launch
The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (the one with the rocket icon). Your study goes live immediately. You are redirected to the Study Dashboard.
The detail most teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional, because most teams want to set up the study config first, review it with stakeholders, and then trigger recruitment on a specific day.
Market-research-specific tip: for a one-time decision-supporting study, launch as soon as the discussion guide is locked. For an ongoing market research program, set a quarterly cadence at the calendar level so the Intelligence Hub starts compounding. Continuous quarterly cohorts beat annual blockbusters because category sentiment shifts faster than any annual study can catch. The Launch docs cover everything that can and cannot be changed post-launch.
Step 7: Invite Participants
This step opens after launch. From the Study Dashboard, the Invites tab and the Actions dropdown give you four invitation methods that map directly to market research recruitment patterns.
Research Panel opens recruitment from the 4M+ panel across 50+ languages, with country and language selection happening here for multilingual studies. This is the main recruitment surface for market research, because most market research audiences are not in your CRM. Use it for category entries, brand perception studies, competitive consideration sets, and any audience where you do not have first-party contact info.
Invite Your Customers lets you import contacts manually, via CSV, or directly from a synced CRM (Salesforce, HubSpot) using the Contacts tab and Segments tab; a minimal CSV sketch follows below. Use this for buyer-journey research on your existing customer base, churn-adjacent market work, or NPS deep-dives.
Share Link generates a unique URL for personalized email outreach to specific contacts. Embed Widget drops always-on feedback collection into your customer portal or post-purchase email for continuous market signal.
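For the CSV route, a minimal contact file could look like the sketch below. The column names are assumptions for illustration; check the Contacts import docs for the exact headers the importer expects.

```python
import csv

# Hypothetical contact export for the "Invite Your Customers" CSV import.
# Column names are illustrative; confirm the required headers in the import docs.
contacts = [
    {"email": "jordan@example.com", "first_name": "Jordan", "segment": "SMB buyer, churned Q2"},
    {"email": "priya@example.com", "first_name": "Priya", "segment": "Mid-market, renewed"},
]

with open("study_invites.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["email", "first_name", "segment"])
    writer.writeheader()
    writer.writerows(contacts)
```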
Market-research-specific tip: for multi-market studies (entering North America from EU, scaling B2B from one geography to another), set country at this step and run parallel cohorts inside a single study. The Intelligence Hub will surface country-by-country differences automatically. The Recruiting overview covers all four methods side by side.
Step 8: Review Insights
Once interviews complete (the first ones inside hours, full cohorts within 48-72 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.
The third surface is the Intelligence Hub, accessed from the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files (PDFs, CSVs), and ask natural-language questions across everything in the room. For a continuous market research program, this is the compounding layer.
Market-research-specific tip: create a single Intelligence Hub session called “Market Research” at the start of the year, sync every quarterly cohort into it, and keep adding sources. Then ask questions like “How has buyer language about our category benefits evolved across the last three quarters?” The cross-study pattern recognition is the part that single studies cannot replicate, and it is the highest-leverage output of the program. The Intelligence Hub docs and the Reports docs cover the full output surface.
Mini case study
A consumer electronics brand ran the workflow above on 25 prospects considering category entry into smart-home audio. The team set up the study in a single afternoon: Brand Health template at Step 1, audience and decision briefed to Charles at Step 2, learning goals tightened at Step 3 to include competitor consideration and motivation laddering, Audio mode and category-usage screeners at Step 4, two test conversations at Step 5, study launched mid-week at Step 6.
Recruitment used the 4M+ panel filtered to US adults 25 to 44 with category-relevant purchase intent. All 25 interviews completed within 48 hours.
The Reports tab and the Intelligence Hub surfaced a finding that contradicted the pre-launch positioning. The team had assumed the wedge was sound quality. Sound quality came up as a primary driver in only 6 of 25 conversations. The dominant theme, present in 17 of 25 transcripts, was setup friction perception: prospects did not believe smart-home audio products would integrate cleanly with their existing devices. The team rebuilt the launch positioning around a “works with everything you already own” promise.
“We thought the question was sound quality. User Intuition interviewed 25 category prospects in 48 hours. Sound quality came up in only 6 conversations. The real wedge was integration friction, and we rebuilt the launch around it.”
Eric O., COO, RudderStack
What gets the best results from market research on User Intuition?
Tips for getting the most out of the workflow, gathered from the patterns that consistently produce decision-quality research.
Anchor every study to a decision. If you cannot name the decision the research will inform, slow down before Step 1. Decision-anchored briefs produce reports that get used. Open-ended exploration produces shelfware.
Keep the research plan tight. Aim for 4 to 6 main themes with 2 to 4 sub-questions each at Step 3. Plans with 8+ themes push interviews past 25 minutes, and completion rates drop sharply. Twelve total questions or fewer keeps completion above 70% on prospect cohorts.
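A quick way to sanity-check plan length against that ceiling is to multiply themes by questions per theme, as in the sketch below; the 12-question threshold and the 70% completion figure are the ones quoted in the tip above.

```python
# Quick check that a draft plan stays under the ~12-question ceiling
# that keeps prospect-cohort completion above 70% (per the guidance above).
def total_questions(themes: int, questions_per_theme: int) -> int:
    return themes * questions_per_theme

for themes, per_theme in [(4, 3), (6, 2), (6, 4), (8, 3)]:
    n = total_questions(themes, per_theme)
    flag = "OK" if n <= 12 else "trim the plan"
    print(f"{themes} themes x {per_theme} questions = {n} -> {flag}")
```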
Run two test conversations every time. One as a high-engagement participant, one as a low-engagement participant who gives one-word answers. The single highest-leverage 20 minutes you can spend before launch is catching shallow probing while it is still cheap to fix.
Pick voice over chat. Voice transcripts capture motivation language, hesitation, and the emotional emphasis that surveys cannot reach. Reserve chat for international cohorts where the participant’s language fluency makes typing a better experience.
Run continuous, not annual. 15 to 25 interviews per quarter at $20/interview on the Pro plan compounds into a sharper picture than one 60-interview annual blockbuster. The Intelligence Hub gets more useful every quarter you feed it.
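The budget arithmetic behind that cadence is straightforward; the sketch below works it through at the roughly $20/interview Pro-plan rate quoted earlier, using the midpoint of the 15 to 25 interview range.

```python
# Budget comparison at the ~$20/interview Pro-plan rate quoted in this article.
PRICE_PER_INTERVIEW = 20

quarterly_cohort = 20                                              # midpoint of the 15-25 range
continuous_annual = 4 * quarterly_cohort * PRICE_PER_INTERVIEW     # four quarterly cohorts
annual_blockbuster = 60 * PRICE_PER_INTERVIEW                      # one 60-interview study

print(f"Continuous program: 4 x {quarterly_cohort} interviews = ${continuous_annual}/year")
print(f"Annual blockbuster: 60 interviews = ${annual_blockbuster}/year, one snapshot")
```

At these magnitudes the continuous program costs modestly more per year, but the signal refreshes every quarter instead of arriving once.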
Sync the Intelligence Hub once per quarter. Every completed cohort should land in the same Market Research session on a regular cadence. Cross-quarter pattern recognition is the highest-leverage output User Intuition produces, and only works if every study lands in the same session. See the comparison of market intelligence platforms for the budgeting case behind quarterly cadence.
Route insights inside 7 days. Findings degrade fast. As soon as the report and the Intelligence Hub are populated, route findings to specific owners (brand, product, marketing, sales enablement) with a defined SLA. The same compounding-intelligence logic applies in adjacent practices like consumer insights programs. The studies that move the business are the ones whose findings reach a decision-maker the same week.
FAQ
The FAQ block above answers the most common questions, and the full canonical answers also surface inline throughout the workflow.