Consumer insights on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. A brand manager, insights manager, or marketing leader can launch a consumer research wave in an afternoon and watch transcripts, satisfaction-rated calls, and synthesized reports show up in the Intelligence Hub within 48-72 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, making a continuous monthly tracker viable for mid-market brands. This tutorial walks through each step with the verbatim labels you will see in the product.
For the broader methodology, see the methodology guide. For headline pricing and proof points against agencies and panels, see the User Intuition consumer insights platform page. This piece stays in the workflow lane: what to click, where to click it, and why each setting matters when the cohort you care about is consumers in a category.
Consumer insights cover brand perception, motivations, and how a category fits into someone’s life. If you’re researching the in-store decision moment or category management, see How to Run Shopper Insights with User Intuition.
What is consumer insights, briefly?
Consumer insights is the systematic practice of interviewing the people who buy and use a category to understand what motivates their choices, how the category fits into their lives, and how a brand sits in their head relative to alternatives. It differs from customer insights (which studies existing customers) and shopper insights (which studies the in-store moment). Consumer insights is one layer up: who the consumer is, what they want from the category, and which brand wins their consideration when the occasion calls for it.
Done well, it answers four questions every brand and insights team needs answered every quarter. What jobs is the consumer hiring this category to do? Where does our brand sit in the consideration set, and on what mental shortcut do we win or lose? What motivations and identity drivers pull the trigger on purchase, beyond stated reasons? How is the category shifting, and what leading-indicator behaviors should we be watching?
The mechanism that makes consumer insights work is depth. A 30-minute conversation with 5 to 7 levels of structured probing surfaces emotional and identity-driven motivations a 5-minute survey will never reach. Category occasions, brand mental availability, the role a product plays in someone’s day, the anxieties that make them switch: these are the patterns underneath rational stated preferences, and they are what move share when you act on them.
Modern AI-moderated platforms compress the timeline from weeks to 48-72 hours and the cost from $15K-$27K per wave to floor pricing around $200. Instead of two big annual studies, brand teams can run 30 to 50 consumer interviews every month, feed every transcript into a searchable Intelligence Hub, and watch category and brand intelligence compound wave by wave.
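The annual math behind that shift is easy to check. A minimal sketch using only the figures cited in this article ($20/interview on the Pro plan, 30 to 50 interviews per month, $15K-$27K agency waves twice a year); the comparison itself is illustrative, not a quote:

```python
# Annual cost comparison using the figures cited in this article.
# The dollar amounts come from the article; the framing is illustrative.

PRICE_PER_INTERVIEW = 20            # Pro plan, per interview
INTERVIEWS_PER_MONTH = (30, 50)     # monthly wave range cited above
AGENCY_WAVE_COST = (15_000, 27_000) # typical agency wave
AGENCY_WAVES_PER_YEAR = 2

continuous_low = INTERVIEWS_PER_MONTH[0] * PRICE_PER_INTERVIEW * 12
continuous_high = INTERVIEWS_PER_MONTH[1] * PRICE_PER_INTERVIEW * 12
agency_low = AGENCY_WAVE_COST[0] * AGENCY_WAVES_PER_YEAR
agency_high = AGENCY_WAVE_COST[1] * AGENCY_WAVES_PER_YEAR

print(f"Continuous monthly tracker: ${continuous_low:,}-${continuous_high:,}/year "
      f"({INTERVIEWS_PER_MONTH[0] * 12}-{INTERVIEWS_PER_MONTH[1] * 12} interviews)")
print(f"Two agency waves:           ${agency_low:,}-${agency_high:,}/year")
```

Even the high end of the continuous program ($12,000 for 600 interviews) lands well under the low end of two agency waves ($30,000).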
Why doing this on User Intuition is different
Most consumer insights tooling makes you choose between depth and speed. Surveys are fast and shallow. Agencies are deep and slow. Online panels end up shallow on both. User Intuition delivers depth at speed.
The AI moderator runs 30-minute structured laddering conversations and probes 5 to 7 levels deep on category occasions, brand associations, motivation hierarchies, and the moments that tipped a purchase. Every conversation runs the same methodology, so a 30-interview study and a 300-interview study are directly comparable. There is no moderator bias, because there is no moderator in the room. Consumers report higher candor with AI than with vendor-affiliated researchers, and the 98% participant satisfaction rate reflects how the experience lands.
Recruitment is built in. The 4M+ research panel covers 50+ languages and lets you target by category usage, demographic, geography, and segment without building a list. For D2C brands, the embed widget drops always-on consumer feedback into product pages, thank-you pages, or customer portals. Existing customers can flow in via CRM contacts integration, but for general-population work, the panel is usually the right starting point.
Synthesis runs in the same product. Once interviews complete, the Reports tab generates Executive Summary, Top Insights, Detailed Analysis, and Recommendations, all grounded in verbatim consumer quotes. The Intelligence Hub layers across studies so wave-over-wave patterns surface compounding insights single-study reports miss. At $20/interview, the stack is sustainable as a continuous program at mid-market scale, where agency programs typically die after one or two cycles. See the User Intuition consumer insights platform page for the full positioning.
The 8-step walkthrough
The product flow is the same for every study type. The choices below are tuned for consumer insights specifically.
Step 1: Create Study
Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. For most consumer insights work, pick the Brand Health card. The selected card highlights with a dark background. Click Save & Continue.
The Brand Health template pre-loads conversation flow, screener defaults, and learning goals tuned for brand perception, loyalty drivers, competitive positioning, and shifting attitudes. If your research question is closer to category usage, motivation hierarchies, jobs-to-be-done, or new-product validation, pick Custom Design and build the plan from a blank canvas with Charles. Both routes funnel into the same Customize Plan step.
Consumer-insights-specific tip: pick “Brand Health” when the business question is “What do people think about us?” and “Custom Design” when the question is “What’s driving category behavior?” or “Why does this segment switch?” See Choose Your Study Type for the full template comparison.
Step 2: Customize Plan
The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.
For consumer insights, give Charles four things in your opening message. First, the brand and category: “We are {Brand Name} in the premium ready-to-drink coffee category.” Second, the audience: “Coffee drinkers, 25 to 45, urban and suburban North America, including category buyers, occasional users, and lapsed users.” Third, the business question: “Our share is declining among the 25 to 35 segment and we want to understand why.” Fourth, the competitor set: “We compete with {Competitor A} in mainstream and {Competitor B} in premium.” Charles returns a research plan asking about category occasions, brand mental availability, motivation drivers, and switching triggers.
Consumer-insights-specific tip: tell Charles which jobs-to-be-done hypotheses you already hold. He uses your hypotheses to design probes that test or break them, which keeps the interviews from drifting into ground you have already covered. Read the Customize Plan docs for the full set of context, scope, and objective questions Charles will ask.
Step 3: Review Goals
The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 Premium RTD Coffee, 25-45 Consumer Tracker”; rename it to fit your internal naming convention). Below that, every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.
This is the step where most teams ensure the plan captures both what consumers DO (behavior) and why (motivations, occasions, jobs-to-be-done). Consumer insights studies fail when they collect attitudes without underlying motivations, or behaviors without the meaning consumers attach to them. Add a sub-question under Key Sub-Questions like “Walk me through the last time you bought {category}. What occasion was it for, and what was going through your mind when you chose {brand}?” That single question opens the occasion-and-motivation territory pure attitude tracking misses.
Consumer-insights-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Longer plans mean longer interviews mean lower completion rates. The Review Study docs have the full editing toolbar reference. Click Save & Continue when the plan reads the way you want.
Step 4: Setup Interview
The Setup Interview screen handles the participant-facing experience. Two voice cards: Elliot (Male, American) and Clara (Female, American). Click the play button on each card to preview the voice. Pick the one whose tone fits your consumer audience; click the card to select it. Below the voice cards, the Default Mode toggles control format: Chat, Audio, or Video. Audio is the recommended default for consumer insights because voice transcripts capture emotional resonance behind associations, occasions, and routines that chat will not.
Below the format toggles, add the high-leverage consumer insights screeners: a category-usage question (“How often do you buy {category}?”) to filter heavy, occasional, lapsed, and non-buyers, and a brand-awareness question (“Which of these brands have you bought in the last six months?”) to capture consideration-set context. Anyone outside your audience definition drops out before consuming a credit. Language and country targeting happens later in Invite Participants, not here.
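The routing behind that screener is worth sanity-checking before launch. As a hypothetical illustration only (the answer options and cohort labels below are assumptions, not the product's actual screener configuration), the category-usage question sorts participants like this:

```python
# Hypothetical sketch of category-usage screener routing.
# Answer options and cohort labels are illustrative assumptions,
# not User Intuition's actual screener configuration.

def classify_usage(answer: str) -> str:
    """Map a category-usage screener answer to a cohort."""
    cohorts = {
        "weekly or more": "heavy",
        "about monthly": "occasional",
        "used to, not in 6+ months": "lapsed",
        "never": "non-buyer",
    }
    return cohorts.get(answer, "screen-out")

def admit(answer: str, target_cohorts: set[str]) -> bool:
    """Drop anyone outside the audience definition before a credit is spent."""
    return classify_usage(answer) in target_cohorts

# A tracker that wants everyone except non-buyers:
print(admit("about monthly", {"heavy", "occasional", "lapsed"}))  # True
print(admit("never", {"heavy", "occasional", "lapsed"}))          # False
```

The design point the sketch captures: screening happens before the interview starts, so out-of-audience participants never consume a credit.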
Consumer-insights-specific tip: pick Elliot or Clara to roughly match the modal demographic of your audience. The match between moderator voice and participant demographic measurably improves rapport and depth, which matters more for consumer insights than other study types because identity-driven language surfaces only when the consumer feels at ease. Read the Setup Interviewer docs for the full mode-selection guide.
Step 5: Test Conversation
The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.
For consumer insights, run two test conversations before launch. Run the first as a heavy-category buyer who chose your brand, and listen for whether the AI probes the why behind the choice (it should not accept “I like the taste” as a final answer; it should ladder into occasion, identity, and emotional resonance). Run the second as a lapsed user who switched away, and listen for whether the AI probes past the surface complaint into category job-to-be-done and identity fit.
Consumer-insights-specific tip: if the AI accepts a surface answer, go back to Step 3 and tighten the Interviewer Guidelines with explicit instructions like “When participants give a rational reason like price or taste, probe at least three levels deeper into occasion, alternative considered, and the feeling associated with the choice.” Then re-test. The Test Conversation docs cover what to look for.
Step 6: Launch
The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click Save and Launch (the rocket icon). Your study goes live immediately and you are redirected to the Study Dashboard.
The detail teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional, because most teams want to set up the config first, review it with leadership, and trigger recruitment on a specific day.
Consumer-insights-specific tip: launch mid-month for highest consumer completion rates. Match cadence to the business question: monthly waves of 30 interviews for ongoing brand and category trackers, ad hoc 75 to 150 interview deep dives for category exploration or new-product validation. The Launch docs cover what can and cannot be changed post-launch.
Step 7: Invite Participants
This step opens after launch. From the Study Dashboard, the Invites tab plus the Actions dropdown gives you four invitation methods that map directly to consumer insights recruitment patterns.
Research Panel is the primary route for consumer insights. It opens recruitment from the 4M+ panel across 50+ languages, with category usage, demographic, and segment filters that match the audience you set in Step 2. Country and language selection happens here for multi-market work (NA, LATAM, EU, APAC). For a brand running parallel waves in five markets, set five panel pulls inside the same study and compare findings side by side in the Intelligence Hub.
Embed Widget drops always-on consumer feedback collection into D2C product pages, thank-you pages, or post-purchase emails, capturing consumers at peak engagement.
Invite Your Customers imports contacts manually, via CSV, or directly from a synced CRM (Salesforce, HubSpot) using the Contacts and Segments tabs, useful for branded-customer research as a complement to panel work.
Share Link generates a unique URL for a personalized email, a Zapier workflow, or a customer-facing campaign.
Consumer-insights-specific tip: combine methods. Panel recruitment for the unbiased general-category read, embed widget for always-on D2C consumer signal, and CRM segments for branded-customer comparison. The Recruiting overview covers all four methods side by side. The Research Panel docs cover the panel filters in full.
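For the Invite Your Customers route, a CSV is the simplest on-ramp. The column names in this sketch are illustrative assumptions, not the product's documented import schema (check the Contacts docs for the exact fields your workspace expects); the point is to carry a segment field so branded-customer results can later be compared against the panel read:

```python
# Build a contacts CSV for manual import.
# Column names here are illustrative assumptions -- confirm the
# exact schema in the Contacts docs before importing.
import csv

contacts = [
    {"email": "jordan@example.com", "first_name": "Jordan", "segment": "heavy"},
    {"email": "sam@example.com", "first_name": "Sam", "segment": "lapsed"},
]

with open("consumer_study_contacts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["email", "first_name", "segment"])
    writer.writeheader()
    writer.writerows(contacts)
```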
Step 8: Review Insights
Once interviews complete (first ones inside hours, full waves within 48-72 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.
The third surface is the Intelligence Hub in the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files (PDFs, CSVs), and ask natural-language questions across everything in the room. For a continuous consumer insights program, this is the compounding layer that delivers most of the long-run value.
Consumer-insights-specific tip: create a single Intelligence Hub session called “Consumer Insights, {Category}” at the start of the year, sync every monthly wave into it, and keep adding sources. Then ask questions like “How has consumer language about our brand evolved across the last six waves?” or “What category occasions did 25 to 35 buyers describe most often, and how is that shifting?” Session-level pattern recognition is what single-wave reports cannot replicate. The Intelligence Hub docs and the Reports docs cover the full output surface.
Mini case study
A mid-market premium beverage brand ran the workflow on 75 consumers across heavy, occasional, and lapsed category users. The team set up the study in a single afternoon: Brand Health template at Step 1, brand and category context briefed to Charles at Step 2, learning goals tightened at Step 3 to cover category occasions and identity drivers, Audio mode and the category-usage screener configured at Step 4, two test conversations at Step 5, launched mid-week at Step 6.
Recruitment used the Research Panel filtered by category buyers, 25 to 45 urban and suburban, across three usage segments. All 75 interviews completed within 60 hours.
The Reports tab and the Intelligence Hub surfaced a finding the brand team had been missing. Tracking dashboards showed satisfaction holding steady, but the verbatim conversations revealed declining emotional connection among 25 to 35 lapsed users, the leading defection indicator that flat satisfaction scores were masking. The dominant theme, present in 41 of 75 transcripts, was that the brand had become “the safe choice my parents drink” rather than a brand the cohort identified with. The team rebuilt brand positioning around a younger occasion-and-identity wedge for the next campaign cycle.
“We used to run 2 consumer studies a year at $20K each and wait 6 weeks for results. Now we run monthly research in 48-72 hours and our team actually uses the findings because they’re searchable and always accessible. The knowledge compounds.”
Stephane N., Head of Insights and Analytics, Microsoft
What gets the best results from consumer insights on User Intuition?
Tips gathered from the patterns that consistently produce share movement.
Keep the research plan tight. Aim for 4 to 6 main themes with 2 to 4 sub-questions each at Step 3. Plans with 8+ themes produce interviews longer than 25 minutes, and completion rates drop sharply. Twelve total questions or fewer keeps completion above 70%.
Run two test conversations every time. One as a heavy-category loyalist, one as a skeptical lapsed user. The highest-leverage 20 minutes you can spend before launch is catching shallow probing while it is still cheap to fix.
Pick voice over chat. Voice transcripts surface emotional resonance behind brand associations and category occasions that chat compresses out. Reserve chat for international cohorts where the participant’s language fluency makes typing a better experience.
Capture both behavior and motivation. A study that tracks attitudes without behaviors, or behaviors without motivations, leaves the most actionable layer untouched. Step 3 should include at least one sub-question on the last category purchase and the occasion behind it.
Run continuous, not annual. Thirty interviews per month at $20/interview on the Pro plan compounds into a sharper category and brand picture than two big annual waves. See the best consumer insights platforms comparison for the cost case behind the monthly cadence.
Stratify the panel pull at Step 7. Set explicit quotas for heavy, occasional, lapsed, and non-buyer cohorts in proportion to your business question. A 5-3-2-1 ratio across the four cohorts is a strong default for general-category trackers.
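The 5-3-2-1 default is easy to turn into concrete quota counts for a panel pull. A minimal largest-remainder apportionment sketch (the function name and cohort keys are mine, not a product API), shown for a 75-interview wave:

```python
# Split a wave of N interviews across cohorts by a fixed ratio,
# using largest-remainder apportionment so quotas sum exactly to N.

def quotas(n: int, ratio: dict[str, int]) -> dict[str, int]:
    total = sum(ratio.values())
    exact = {k: n * w / total for k, w in ratio.items()}
    out = {k: int(v) for k, v in exact.items()}  # floor each quota
    leftover = n - sum(out.values())
    # Hand the remaining slots to the largest fractional remainders.
    for k in sorted(exact, key=lambda k: exact[k] - out[k], reverse=True)[:leftover]:
        out[k] += 1
    return out

print(quotas(75, {"heavy": 5, "occasional": 3, "lapsed": 2, "non-buyer": 1}))
# -> {'heavy': 34, 'occasional': 20, 'lapsed': 14, 'non-buyer': 7}
```

Plain proportional rounding can over- or under-shoot the wave size; the largest-remainder step guarantees the cohort counts always add up to exactly N.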
Sync the Intelligence Hub once per month. Cross-wave pattern recognition only works if every wave lands in the same category session on a regular cadence. The same logic powers ongoing programs in CPG consumer research, where category truths sharpen quarter over quarter.
Route insights inside 7 days. Findings degrade fast. As soon as the report and the Intelligence Hub are populated, route findings to specific owners (brand strategy, product, marketing, channel partners) with a defined SLA. The studies that move share are the ones whose findings reach a decision-maker the same week.
FAQ
The frontmatter FAQ block above answers the most common questions. Full canonical answers also surface inline in the workflow.