Product innovation research on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. A product manager, innovation director, or R&D lead can launch a consumer-interview program in an afternoon, then watch transcripts, satisfaction-rated calls, and synthesized reports show up inside the Intelligence Hub within 24-48 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which makes a continuous monthly program viable across a multi-stage CPG innovation pipeline. This tutorial walks through each of the eight steps with the verbatim labels you will see in the product, the choices that matter most for innovation research, and the links to the underlying documentation.
For the broader methodology (how innovation research differs from concept testing and how to design a longitudinal program), see the methodology guide. For pricing, proof points, and the comparison against Suzy, dscout, Quantilope, and Numerator, see the User Intuition product innovation research solution page. This piece stays in the workflow lane: what to click, where, and why each setting matters when your cohort spans current category buyers, near-adjacent buyers, and non-buyers.
What is product innovation research, briefly?
Product innovation research is the systematic practice of interviewing consumers across the full innovation pipeline to understand what category problem you are solving, who you are solving it for, and how the product needs to feel in their hands. It is longitudinal by design. Unlike concept testing, which evaluates a specific concept at a specific moment, innovation research runs three loops: opportunity discovery before the pipeline opens, concept iteration mid-pipeline, and post-launch tracking on first SKUs.
Done well, innovation research answers four questions every R&D or NPD team needs answered every quarter. First, what category problem is unmet today, and who feels it most acutely? Second, when consumers see our concept, what do they think it is for? Third, what would convince them to switch? Fourth, after launch, where does the next iteration go?
The mechanism that makes innovation research work is depth across stages. A 30-minute conversation with a category buyer, with 5 to 7 levels of structured probing, surfaces both the FUNCTIONAL JOB (what category problem the product solves) and the EMOTIONAL JOB (what the buyer feels when they use it). Innovation research that only collects feature feedback misses the why-it-matters layer, and that is where most CPG NPD pipelines die.
Modern AI-moderated platforms compress the timeline from weeks to 24-48 hours and the cost from $15K+ per study to roughly $20 per interview with a $200 study floor. Instead of one annual innovation study, mid-to-large consumer brands can run 10 to 15 interviews every month aligned to the pipeline, feed every transcript into a searchable Intelligence Hub, and watch category insight compound stage by stage.
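To make the economics concrete, here is a back-of-the-envelope sketch in Python using only the figures cited in this article ($20/interview, the $200 floor, $15K+ per agency study). Treating the floor as a per-study minimum is an assumption on our part; check the pricing page for how it actually applies.

```python
# Back-of-the-envelope program cost using the figures cited in this article.
INTERVIEW_COST = 20      # USD per interview (Pro plan figure cited above)
STUDY_FLOOR = 200        # assumed per-study minimum (the "$200 floor")
AGENCY_STUDY = 15_000    # typical agency-led qualitative study, per this article

def monthly_study_cost(interviews: int) -> int:
    """Cost of one monthly study: per-interview pricing with a per-study floor (assumption)."""
    return max(interviews * INTERVIEW_COST, STUDY_FLOOR)

# A continuous 10-15 interview/month program vs. one annual agency study.
for n in (10, 15):
    annual = 12 * monthly_study_cost(n)
    print(f"{n} interviews/month -> ${annual:,}/year vs. ${AGENCY_STUDY:,}+ for a single agency study")
```

Even at the top of that range, a full year of monthly interviewing costs a fraction of one traditional study, which is what makes the longitudinal program in this tutorial financially realistic.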
Why doing this on User Intuition is different
Most innovation tooling forces a choice between depth and speed. Quant platforms like Quantilope and Suzy are fast and shallow. Agency-led qualitative is deep and slow. User Intuition delivers both.
The AI moderator runs 30-minute structured laddering conversations and probes 5 to 7 levels deep on category mental models, switching triggers, and the moments that reveal an unmet need. Every conversation runs the same methodology, so a study with 10 interviews and a study with 300 interviews are directly comparable. There is no moderator bias, because there is no human moderator in the room. Consumers report higher candor with AI than with brand-affiliated researchers, and the 98% participant satisfaction rate reflects how the experience lands.
Recruitment is built in. The 4M+ research panel covers 50+ languages and lets you target by category behavior, demographics, and geography without building a list. For latent-need discovery, the panel recruits slightly beyond your current category, since adjacent-category buyers often surface unmet needs your customers have adapted around. Existing customer lists flow in via the Contacts integration, the unique study link, or the embed widget for always-on post-launch listening.
Synthesis runs in the same product. Once interviews complete, the Reports tab generates an Executive Summary, Top Insights, Detailed Analysis, and Recommendations, grounded in verbatim consumer quotes. The Intelligence Hub layers across studies, so opportunity discovery + concept testing + post-launch tracking compound into a category-level insight engine that any single-stage study would miss. Pricing at $20/interview makes the stack sustainable across multi-stage pipelines, where agency programs typically die after one or two cycles. See the product innovation research solution page for the full positioning.
The 8-step walkthrough
The product flow is the same for every study type. The choices below are tuned for product innovation research specifically.
Step 1: Create Study
Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. Pick the template that matches your innovation pipeline stage. Custom Design is the right pick for opportunity discovery (pre-pipeline) and most concept iteration studies, because innovation questions rarely fit the standard templates. Brand Health is the right pick when post-launch tracking on first SKUs is the goal and you want category perception drift over time. The selected card highlights with a dark background. Click Save & Continue.
The Custom Design template opens a blank canvas where Charles, the platform's AI researcher (you meet him in Step 2), builds the entire research plan from your brief. It is also the right pick for line-extension testing and packaging research. The Win/Loss, Churn, and NPS templates are not the right fit for innovation work; they are tuned for revenue and retention cohorts.
Innovation-research-specific tip: pick by pipeline stage. “What are we missing in the market?” framing for opportunity discovery, “What do people think of this concept?” for mid-pipeline iteration, “What is driving satisfaction?” for post-launch tracking. See Choose Your Study Type for the full template comparison.
Step 2: Customize Plan
The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan in a panel on the right: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines.
For innovation research, give Charles three things in your opening message. First, the innovation thesis: “We believe there is unmet need in {category} around {functional problem}, and we are exploring whether {product direction} would solve it.” Second, the audience: “Current category buyers, near-adjacent buyers from {adjacent category}, and non-buyers we want to convert.” Third, the pipeline stage: opportunity discovery, mid-pipeline concept reaction, or post-launch tracking. Charles returns a research plan that probes category mental models, current solutions, frustrations, and willingness to switch.
Innovation-research-specific tip: tell Charles which category mental models you think you understand and which you suspect you have wrong. He uses your hypotheses to design probes that test or break them, which is how innovation research avoids confirming what the team already believes. Read the Customize Plan docs for the full set of questions Charles will ask.
Step 3: Review Goals
The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 Snack Innovation, Opportunity Discovery”; rename it to your internal pipeline naming convention). Every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.
This is the step where innovation research lives or dies. Edit the goals to capture both the FUNCTIONAL JOB (what category problem the product solves) and the EMOTIONAL JOB (what the buyer feels when they use it). The default plan often skews toward feature feedback. Add explicit Key Sub-Questions like “What does this product help you accomplish that you cannot accomplish today?” and “When you imagine using this product, what does it make you feel about yourself?” Functional without emotional gives you feature lists. Emotional without functional gives you vibes. You need both.
Innovation-research-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Longer plans mean longer interviews mean lower completion rates. The Review Study docs have the full editing toolbar reference. Click Save & Continue when the plan reads the way you want.
Step 4: Setup Interview
The Setup Interview screen handles the participant-facing experience. Two voice cards: Elliot (Male, American) and Clara (Female, American). Click the play button on each to preview. Below the voice cards, the Default Mode toggles set the interview format: Chat, Audio, or Video. Audio is the recommended default for innovation research because voice captures spontaneous reactions, analogies, and the emotional language that reveals consumer mental models around a category.
Below the format toggles is the screener questions section. Add the category-buyer screener: “How often do you purchase {category} products?” with answer options for frequent, occasional, and never. For latent-need discovery, also screen for adjacent-category behavior so you capture buyers outside your current category who often surface unmet needs. For multilingual studies, language and country targeting happens later in the Invite Participants step.
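If it helps to see the cohort logic written down, here is a minimal sketch, purely illustrative and not platform code, of how the two screeners above map respondents into the three cohorts this article keeps returning to. The answer labels mirror the example screener; the bucketing rules are assumptions you would tune per study.

```python
# Illustrative cohort bucketing from the screeners described above -- planning
# logic only, not platform code. Bucketing rules are assumptions to tune per study.
def assign_cohort(category_frequency: str, buys_adjacent_category: bool) -> str:
    """Map screener answers to the three cohorts used throughout this article."""
    if category_frequency in ("frequent", "occasional"):
        return "current category buyer"
    if buys_adjacent_category:
        return "near-adjacent buyer"  # often the richest source of latent unmet needs
    return "non-buyer"

# Example: never buys the category, but buys an adjacent one (e.g., protein bars)
print(assign_cohort("never", buys_adjacent_category=True))  # -> near-adjacent buyer
```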
Innovation-research-specific tip: pick Clara for category audiences that skew female-buyer (beauty, home care, family snacks) and Elliot for those that skew male-buyer (auto, men’s grooming, energy drinks). The match between moderator voice and consumer demographic measurably improves rapport. Read the Setup Interviewer docs for the full mode-selection guide.
Step 5: Test Conversation
The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.
For innovation research, run two test conversations before launch. Run the first as a current category buyer who is happy with their existing solution, and listen for whether the AI probes hard on what they would have to see to even consider switching. Run the second as a near-adjacent buyer who has adapted around your category problem with a workaround from another category, and listen for whether the AI probes that workaround as a signal of latent unmet need.
Innovation-research-specific tip: if the AI accepts a surface answer about feature preferences without probing mental model and emotional resonance underneath, go back to Step 3 and tighten the Interviewer Guidelines with “Probe at least three levels deep when participants describe what a product does, asking what it helps them accomplish and what it makes them feel.” Then re-test. The Test Conversation docs cover what to look for during testing.
Step 6: Launch
The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (the one with the rocket icon). Your study goes live immediately. You are redirected to the Study Dashboard.
The detail product teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional: most teams set up the configuration first, review it with R&D and product leads, then trigger recruitment on a specific day aligned to the pipeline.
Innovation-research-specific tip: align launch cadence to your pipeline. For longitudinal programs, set a quarterly cadence aligned to stage gates so opportunity discovery feeds the next quarter’s concept iteration, which feeds the next quarter’s launch tracking. For sprint-style mid-pipeline checks, launch within days of locking the iteration. The Launch docs cover what can and cannot be changed post-launch.
Step 7: Invite Participants
This step opens after launch. From the Study Dashboard, the Invites tab and the Actions dropdown give you four invitation methods that map directly to innovation research recruitment.
Research Panel is the primary recruitment method for innovation research: it opens recruitment from the 4M+ panel across 50+ languages, and country and language selection happens here for multi-region innovation. Filter by category-buyer behavior, purchase frequency, and openness to new products. For latent-need discovery, recruit slightly beyond your current category; adjacent-category buyers often surface unmet needs your customers have already adapted around. Invite Your Customers imports contacts manually, via CSV, or directly from a synced CRM (Salesforce, HubSpot) using the Contacts and Segments tabs; use this for post-launch tracking on first SKUs. Share Link generates a unique URL for personalized outreach to category influencers or R&D advisors. Embed Widget drops always-on feedback collection into your DTC site or post-purchase email for continuous post-launch listening.
Innovation-research-specific tip: combine methods across pipeline stages. Panel for opportunity discovery and concept iteration, customer invites for post-launch tracking. The Recruiting overview covers all four side by side.
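For post-launch tracking via Invite Your Customers, you will usually be exporting first-SKU buyers from an order system into a CSV before importing. A minimal Python sketch of that prep step is below; the column names are placeholders we chose for illustration, so check the Contacts docs for the exact schema the importer expects.

```python
# Illustrative prep of a contact list for the "Invite Your Customers" import.
# Column names are placeholders -- check the Contacts docs for the exact schema
# the importer expects before uploading.
import csv

first_sku_buyers = [
    {"email": "jordan@example.com", "first_name": "Jordan"},
    {"email": "sam@example.com", "first_name": "Sam"},
]

with open("first_sku_buyers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["email", "first_name"])
    writer.writeheader()
    writer.writerows(first_sku_buyers)
```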
Step 8: Review Insights
Once interviews complete (the first ones inside hours, full cohorts within 24-48 hours), three surfaces hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.
The third surface is the Intelligence Hub, accessed from the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files (PDFs, CSVs), and ask natural-language questions across everything in the room. For innovation research, the Hub is critical: cluster reactions across pipeline stages into a category-level insight engine that single-stage research cannot replicate.
Innovation-research-specific tip: create a single Intelligence Hub session called “{Category} Innovation Pipeline” at the start of the year, sync every quarterly study into it, and keep adding sources. Then ask questions like “How has consumer language about our category problem evolved from opportunity discovery in Q1 to concept testing in Q2 to launch tracking in Q3?” The Intelligence Hub docs and the Reports docs cover the full output surface.
Mini case study
A mid-market CPG snack brand ran the workflow on a three-stage innovation program for a new better-for-you snack line. Stage one was opportunity discovery: 24 interviews with current category buyers and adjacent-category buyers (frozen meals, protein bars) who had adapted around afternoon hunger. The team picked the Custom Design template at Step 1, briefed the innovation thesis to Charles at Step 2, set learning goals at Step 3 capturing both the functional job (afternoon energy, no crash) and the emotional job (feeling like a parent who has it together), chose Audio mode and a category screener at Step 4, ran two test conversations at Step 5, and launched at Step 6.
Stage two ran 90 days later: 22 interviews on three concept variants, same audience, same Intelligence Hub session. Stage three ran 90 days after launch: 18 interviews with first-SKU buyers via the Contacts integration plus 12 panel interviews for non-buyer perception. All synced into the same “Better-For-You Snack Innovation Pipeline” session.
The Intelligence Hub surfaced a finding none of the three single-stage reports caught alone. The functional job was clear from stage one. The emotional job that came up in stage one but got lost in stage two was about parent identity, not health. The brand had been positioning the launch around clean ingredients. The Hub showed the consumer language was about “having it together” and “not falling apart at 4pm.” The team rebuilt the launch messaging around that emotional job. Trial rate on the second wave improved by 31% versus the first.
“User Intuition turned our product roadmap from a stakeholder debate into a data-backed strategy. We tested 5 concepts in 2 weeks. One of them became our {fastest-growing feature}.”
Eric O., COO, RudderStack
What gets the best results from product innovation research on User Intuition?
These tips for getting the most out of the workflow are drawn from the patterns that consistently produce category-level insight.
Keep the research plan tight. Aim for 4 to 6 main themes with 2 to 4 sub-questions each at Step 3. Plans with 8+ themes produce interviews longer than 25 minutes, and completion rates drop sharply. Twelve total questions or fewer keeps completion above 70% on consumer cohorts.
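If you want to sanity-check a plan before leaving Step 3, a tiny sketch like this makes the math explicit. The plan below is hypothetical; the thresholds come straight from the guideline above.

```python
# Quick plan-size check reflecting the guideline above: 4-6 themes, 2-4
# sub-questions each, ~12 questions total. The example plan is hypothetical.
plan = {
    "Current solutions": 3,
    "Category frustrations": 3,
    "Concept reaction": 3,
    "Switching triggers": 2,
}
total = sum(plan.values())
assert 4 <= len(plan) <= 6, "aim for 4-6 themes"
assert all(2 <= q <= 4 for q in plan.values()), "aim for 2-4 sub-questions per theme"
print(f"{len(plan)} themes, {total} questions ->",
      "OK" if total <= 12 else "trim: completion drops sharply past ~12 questions")
```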
Capture both jobs at Step 3. Functional and emotional. Innovation research that only collects feature feedback misses the why-it-matters layer. Add explicit goals for both, every time.
Run two test conversations every time. One as a happy current category buyer, one as an adjacent-category buyer with a workaround. The highest-leverage 20 minutes you can spend before launch is catching shallow probing on category mental models while it is cheap to fix.
Pick voice over chat. Voice transcripts surface the analogies, hesitations, and emotional language that reveal mental models. Chat compresses all of that out. Reserve chat for international cohorts where the participant’s language fluency makes typing a better experience.
Run the program longitudinally. 10 to 15 interviews per month at $20/interview compounds into a sharper category picture than one 40-interview annual study. The Intelligence Hub gets more useful every month you feed it.
Sync the Intelligence Hub once per pipeline stage. Every completed study (opportunity discovery, concept testing, post-launch tracking) should land in the same {Category} session. Cross-stage pattern recognition is the highest-leverage output User Intuition produces, and it only works if every study lands in the same session. See the product leader’s guide to innovation research and the AI product innovation research overview for the program-design case.
Route insights within 7 days of each stage. Findings degrade fast. As soon as the report and Intelligence Hub are populated, route specific findings to specific owners (R&D, brand, packaging, NPD pipeline owner) with a defined SLA. This is the same pattern that connects to concept testing programs. The studies that move category share are the ones whose findings reach a decision-maker the same week.
FAQ
The most common questions about this workflow are answered inline in the steps above, alongside the linked documentation for each step.