Insights & Guides · 12 min read

How to Run Competitive Intelligence with User Intuition

Competitive intelligence on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. The product is built so a Product Marketing lead, a Competitive Intelligence specialist, or a sales enablement manager can launch a buyer-perception program in an afternoon, then watch transcripts, satisfaction-rated calls, and synthesized reports show up inside the Intelligence Hub within 48-72 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which means a continuous quarterly program is financially viable even at $20M to $80M ARR. This tutorial walks through each of the eight steps with the verbatim labels you will see in the product, the choices that matter most for competitive intelligence specifically, and the links to the underlying documentation.

For the broader methodology (how to design a CI program, what to monitor, how to route insights), see the methodology guide. For the headline pricing, the proof points, and the comparison against Klue and Crayon, see the User Intuition competitive intelligence platform page. This piece stays in the workflow lane: what to click, where to click it, and why each setting matters when the cohort you care about is recent category buyers who chose between you and named competitors.

What is competitive intelligence, briefly?


Competitive intelligence is the systematic practice of understanding why buyers choose competitors, not just tracking what those competitors do publicly. It sits between two failed substitutes: monitoring tools like Klue and Crayon, which scrape competitor websites and tell you what changed but never why buyers reacted, and consultant-led studies, which take 4 to 8 weeks and cost $50K to $200K per cohort. Neither produces continuous intelligence.

Done well, competitive intelligence answers four questions every revenue and product team needs to answer every quarter. First, why are buyers in our core segment choosing the competitor they choose? Second, where do we own the perception, and where does the competitor own it? Third, what triggers a switch in either direction? Fourth, what changed since last quarter?

The mechanism that makes competitive intelligence work is depth. A 20-minute conversation with a recent category buyer, with 5 to 7 levels of structured probing, surfaces the buyer mental model that a feature checklist will never reach. Implementation confidence, narrative simplicity, perceived momentum, champion confidence: these are the patterns underneath the feature comparison. They show up only in conversation.

Modern AI-moderated platforms compress the timeline from weeks to 48-72 hours and the cost from $25K+ subscriptions to $200 floor pricing. Instead of one annual competitive snapshot, mid-market teams can run 20 to 30 buyer interviews every quarter, feed every transcript into a searchable Intelligence Hub, and watch the competitive picture compound study by study.

Why doing this on User Intuition is different


Most competitive intelligence tooling makes you choose between depth and breadth. Monitoring tools are broad and shallow. Consultants are deep and slow. User Intuition is built to deliver depth at the cadence that matters.

The AI moderator runs 20-minute structured laddering conversations and probes 5 to 7 levels deep on the competitor mental model, the trigger that opened the evaluation, and the moments that tipped the decision. Every conversation follows the same methodology, so a study with 20 interviews and a study with 200 interviews are directly comparable. There is no rep bias, because there is no rep in the room. Buyers report higher candor with AI than with vendor-affiliated researchers, and the 98% participant satisfaction rate reflects how the experience lands.

Recruitment is built in. The 4M+ research panel covers 50+ languages and lets you target by demographics, geography, and recent purchase behavior without building a list. For competitive intelligence specifically, recruiting recent category buyers from the panel matters more than for any other study type, because it sources buyers who never entered your pipeline. Lost-deal contacts flow in from Salesforce or HubSpot via the Contacts integration when you want extra context, and the embed widget drops always-on competitive feedback into your post-deal email.

Synthesis runs in the same product. Once interviews complete, the Reports tab generates an Executive Summary, Top Insights, Detailed Analysis, and Recommendations, all grounded in verbatim buyer quotes. The Intelligence Hub layers analysis across studies, so patterns from quarter one and quarter two compound into insights any single-study report would miss. Pricing at $20/interview keeps the stack sustainable as a continuous program at mid-market scale. See the competitive intelligence solution page for the full positioning.

The 8-step walkthrough


The product flow is the same for every study type. The choices below are tuned for competitive intelligence specifically.

Step 1: Create Study

Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. For competitive intelligence, the right template depends on the question. Pick Win/Loss Analysis for direct head-to-head loss research with a defined competitor set. Pick Brand Health for relative brand perception across a category. Pick Churn Analysis for competitive churn-out research where customers left you for a named competitor. The selected card highlights with a dark background. Click Save & Continue.

Each template pre-loads the conversation flow, screener defaults, and learning goals tuned for that shape of question. Starting from the right template saves 10 minutes versus Custom Design. For most competitive intelligence work, Win/Loss Analysis is the workhorse, because the mechanics of probing competitive perception are similar whether the deal closed-lost in your pipeline or never entered it.

CI-specific tip: if you are studying multiple competitors, run one study per primary competitor rather than one covering all of them. Cross-study comparison in the Intelligence Hub is sharper. See Choose Your Study Type for the full template comparison.

Step 2: Customize Plan

The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.

For competitive intelligence, give Charles three things in your opening message. First, the competitor set: “We are studying buyers choosing between us and {Competitor A} and {Competitor B} in the past 6 months.” Second, the buying contexts: “Mid-market B2B SaaS, ACV between $50K and $250K, North America.” Third, the decision the work informs: “This research feeds a positioning shift and a battlecard refresh for sales enablement.” Charles returns a plan that asks about evaluation triggers, consideration set order, decision criteria, and the moments that tipped the decision.
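
Put together, the opening message to Charles can be a single short brief. Here is a sketch assembled from the three elements above, with placeholders to swap for your own competitors, segment, and decision: “We are studying buyers who chose between us, {Competitor A}, and {Competitor B} in the past 6 months. Our segment is mid-market B2B SaaS, ACV between $50K and $250K, North America. This research feeds a positioning shift and a battlecard refresh for sales enablement.”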

CI-specific tip: tell Charles which competitive narratives you suspect are driving losses. He uses your hypotheses to design probes that test or break them. Read the Customize Plan docs for the full set of questions Charles will ask.

Step 3: Review Goals

The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 Competitive Perception, Mid-Market SaaS”; rename it to fit your internal naming convention). Below that, every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.

This is the step where competitive intelligence diverges from generic win-loss. Edit Key Sub-Questions to capture two specific dimensions: WHY-not-us (the head-to-head loss reason at the level of buyer mental model, not feature checklist) AND WHEN-they-considered-us (the consideration set order and the trigger that opened the evaluation). Add a sub-question like “Walk me through the three vendors you seriously evaluated, in the order you considered them, and what each one did to earn or lose that consideration.” Most competitive intelligence collects feature comparisons. Great competitive intelligence surfaces buyer mental models.
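
If you want more than one probe per dimension, two illustrative additions (hypothetical wordings, adapt them to your category): “What changed in your business that opened this evaluation in the first place?” targets the trigger, and “Was there a specific moment when the decision tipped, and what happened in it?” targets the moments that decided the deal.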

CI-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Plans longer than 12 total questions push interviews past 25 minutes and drop completion rates. The Review Study docs have the full editing toolbar reference. Click Save & Continue when the plan reads the way you want.

Step 4: Setup Interview

The Setup Interview screen handles the participant-facing experience. You will see two voice cards: Elliot (Male, American) and Clara (Female, American). Click the play button on each card to preview the voice. Pick the one whose tone fits your buyer audience; click the card to select it. Below the voice cards, the Default Mode toggles control the interview format: Chat, Audio, or Video. Audio is the recommended default for competitive intelligence because voice transcripts surface tone, hesitation, and emotional emphasis around competitive comparisons that chat will not.

Below the format toggles is the screener questions section. Add the highest-leverage CI screener: “In the last 6 months, did you evaluate {our brand} alongside {Competitor X}?” Anyone who answers no drops out before consuming a credit, so every interview that completes is a qualified comparison buyer. For multilingual studies, language and country targeting happens later in the Invite Participants step, not here.

CI-specific tip: pick Elliot for buyer audiences that skew male and Clara for audiences that skew female. The match between moderator voice and participant demographic measurably improves rapport and depth, which matters more for competitive intelligence because candor on competitor perception drives the highest-leverage insight. Read the Setup Interviewer docs for the mode-selection guide.

Step 5: Test Conversation

The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.

For competitive intelligence specifically, run the test as a buyer who chose a named competitor for a non-feature reason (“their CEO felt more credible on the demo call” or “their onboarding plan was more concrete”). Listen for whether the AI probes past the surface into the buyer mental model. Competitive intelligence that just collects feature scores is wasted.

CI-specific tip: if the AI accepts a feature-checklist answer, go back to Step 3 and tighten the Interviewer Guidelines with instructions like “When participants describe their decision in feature terms, probe at least three levels deeper into emotional drivers.” Then re-test. The Test Conversation docs cover what to look for.

Step 6: Launch

The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. The final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (the one with the rocket icon). Your study goes live immediately, and you are redirected to the Study Dashboard.

The detail most teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional, because teams typically want to set up the study config first, review it together, and then trigger recruitment on a specific day.

CI-specific tip: for ongoing competitive programs, set a quarterly cadence aligned to your sales planning cycle, so the perception data lands the same week as your QBR. For one-time launches around a specific event (a competitor repositioning, a new entrant, your own positioning shift), launch immediately and recruit the same day. The Launch docs cover everything that can and cannot be changed post-launch.

Step 7: Invite Participants

This step opens after launch. From the Study Dashboard, the Invites tab and its Actions dropdown give you four invitation methods that map directly to competitive intelligence recruitment patterns.

Research Panel opens recruitment from the 4M+ panel across 50+ languages, with country and language selection happening here for multi-region competitive maps. This is the workhorse for CI: recruiting recent category buyers from the panel avoids the sales-narrative bias that contaminates lost-deal interviews from your own pipeline. Invite Your Customers imports contacts manually, via CSV, or directly from Salesforce or HubSpot using the Contacts and Segments tabs. Useful for layering in lost-deal context. Share Link generates a unique URL you can paste into a personalized email to a known lost buyer. Embed Widget drops always-on competitive feedback collection into your customer portal or post-deal email.

CI-specific tip: combine methods. Panel for the bulk of the cohort, share links for 3 to 5 lost-deal contacts where you want the deeper context. The Recruiting overview covers all four methods.

Step 8: Review Insights

Once interviews complete (the first ones inside hours, full cohorts within 48-72 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.

The third surface is the Intelligence Hub, accessed from the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files (PDFs, CSVs), and ask natural-language questions across everything in the workspace. The Hub clusters mentions of each competitor across studies and surfaces the competitive frame that is hardest to see from inside.

CI-specific tip: create a single Intelligence Hub session called “Competitive Intelligence” at the start of the year, sync every quarterly study into it, and keep adding sources. Then ask cross-study questions like “How has buyer language about Competitor X’s onboarding evolved across the last three quarters?” Pair this with the Reports module to generate weekly battlecard updates for sales enablement. The Intelligence Hub docs and the Reports docs cover the full output surface.

Mini case study


A mid-market B2B SaaS team ran the workflow above on 28 recent category buyers who had chosen between their product and two named competitors over the past six months. The team set up the study in a single afternoon: Win/Loss Analysis template at Step 1, competitor set briefed to Charles at Step 2, learning goals tightened at Step 3 to capture WHY-not-us and WHEN-they-considered-us, Audio mode and the recent-evaluation screener at Step 4, one test conversation at Step 5, launched mid-week at Step 6.

Recruitment ran primarily through the 4M+ research panel, with 4 share-link interviews layered in from known lost deals. All 28 interviews completed within 60 hours.

The Intelligence Hub surfaced a finding the sales narrative had missed entirely. The team had been losing on what their reps called “feature gaps.” In 19 of 28 conversations, the dominant theme was onboarding experience: buyers did not believe the team could onboard them at the same speed as the named competitor, and the perceived gap was rooted in how each vendor described the first 30 days during the demo. The actual decision driver was implementation confidence. The team rebuilt the demo around a 30-day implementation milestone narrative. Within two quarters, the win rate against that competitor improved by 18%.

“We discovered our main competitor was winning on onboarding experience, not features. Our battlecards were focused on the wrong things entirely. Within two months of adjusting our messaging, win rates improved by 18%.”

Stephane N., Head of Insights and Analytics, Microsoft

What gets the best results from competitive intelligence on User Intuition?


These tips for getting the most out of the workflow are gathered from the patterns that consistently produce competitive movement.

Lean on the panel, not the CRM. For competitive intelligence specifically, recruit recent category buyers from the 4M+ research panel rather than your CRM. This avoids the sales-narrative bias that contaminates lost-deal interviews and captures buyers who never entered your pipeline. Layer in 3 to 5 share-link interviews with known lost deals for deeper context.

Edit the goals to capture the buyer mental model. At Step 3, push past feature-comparison sub-questions and add probes about the consideration set order, the trigger that opened the evaluation, and the emotional drivers underneath the rational decision. Great competitive intelligence surfaces the mental model the buyer used to decide.

Pick voice over chat. Voice transcripts surface tone, hesitation, and emotional emphasis around competitive comparisons that chat compresses out. The 18% win-rate movement in the case study above came from a tonal pattern that a chat interview would have erased.

Run quarterly, not annually. 20 to 30 interviews per quarter at $20/interview on the Pro plan compounds into a sharper picture than one 80-interview annual snapshot. Quarterly perception shift becomes a leading indicator of competitive deal velocity.
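
The arithmetic behind that cadence, using the $20/interview Pro-plan rate quoted above: 25 interviews per quarter is roughly $500, or about $2,000 across four quarterly cohorts, while a single 80-interview annual snapshot is about $1,600. The continuous program costs marginally more and delivers four reads on the competitive picture instead of one.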

One study per primary competitor. Cross-study comparison in the Intelligence Hub is sharper than within-study cohort comparison. If you compete against three named competitors, run three studies and let the Hub do the cross-competitor analysis.

Sync the Intelligence Hub once per quarter. Every completed study should land in the same Competitive Intelligence session. Cross-quarter pattern recognition is the highest-leverage output User Intuition produces, and it only works if every study lands together. See the cost breakdown for continuous CI programs and the comparison of competitive intelligence platforms for the budgeting case behind the quarterly cadence.

Route insights inside 7 days. Findings degrade fast. As soon as the report and the Intelligence Hub are populated, route specific findings to specific owners (sales enablement, product marketing, RevOps) with a defined SLA. This is the same pattern that connects directly to win-loss analysis programs. The studies that move competitive win rate are the ones whose findings reach a decision-maker the same week.


Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

How long does setup take?
Setup takes 10 to 20 minutes from Create Study to Launch if you have your competitor list and buying contexts ready. Charles, the AI researcher inside User Intuition, drafts the objective, background, and learning goals from a short brief. Most teams spend more time recruiting buyers than configuring the study.

Do I need a dedicated researcher to run this?
No. Product Marketing, Competitive Intelligence specialists, sales enablement, or RevOps leads can run a full study without a dedicated researcher. The AI moderates, synthesizes, and reports. Your job is to brief Charles on the competitor set, edit the learning goals, and review the Intelligence Hub once interviews complete.

Should I recruit from the panel or my own CRM?
For competitive intelligence specifically, lean on the 4M+ panel. Recruiting recent category buyers who never entered your pipeline avoids the sales-narrative bias that contaminates lost-deal interviews. Use the panel for the bulk of the cohort, and add a few share-link interviews with recent lost deals where you want deeper context.

Should interviews run over voice or chat?
Voice for almost every competitive intelligence study. Voice transcripts capture tone and hesitation around the buyer mental model, surface emotional drivers that chat compresses out, and complete in 10 to 20 minutes without typing fatigue. Chat is the right call only for international cohorts where typing fluency outranks speaking fluency. Audio is the recommended default in the Setup Interview screen.

How many interviews do I need?
Fifteen to twenty-five interviews is the floor for credible head-to-head perception. Forty to sixty interviews is where competitor-by-competitor and segment-by-segment comparisons become defensible. The 4M+ research panel makes it practical to run continuous quarterly studies of 20 to 30 interviews instead of one large annual snapshot.

How do screener questions work?
Screener questions sit inside the Setup Interview step and run before the main interview begins. For competitive intelligence, the highest-leverage screener gates to recent category buyers: 'In the last 6 months, did you evaluate {our brand} alongside {competitor X}?' Anyone who answers no drops out before consuming a credit, so every interview is a qualified comparison buyer.

Can I run multilingual or multi-region studies?
Yes. User Intuition supports 50+ languages, and the country and language selection happens at the Invite Participants step when you build a research panel. For multi-region competitive maps, you can run parallel English, Spanish, French, German, and Japanese cohorts inside a single study and compare verbatim themes side by side in the Intelligence Hub.

What does it cost?
On the Professional plan, audio interviews run at $20/interview, the headline rate. Studies start at $200, with no monthly minimums and no contract. Twenty interviews per quarter for a continuous competitive intelligence program runs roughly $400, well below a $25K to $100K Crayon or Klue subscription.

How does this compare to a consultant-led study?
Consultants take 4 to 8 weeks per study at $50K to $200K. The User Intuition workflow lands buyer interviews and a synthesized report in 48-72 hours per cohort. Charles handles methodology, the panel handles recruitment, and the Intelligence Hub handles compounding pattern recognition across studies, so you avoid starting over every quarter.

Where do the insights live once interviews complete?
Three places inside User Intuition. The Calls tab holds every transcript and audio recording with quality ratings. The Reports tab holds the AI-synthesized Executive Summary, Top Insights, Detailed Analysis, and Recommendations. The Intelligence Hub aggregates studies over time so you can ask cross-study questions like 'How has buyer language about Competitor X's onboarding evolved across the last three quarters?'

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
