Insights & Guides · 12 min read

How to Run Win-Loss Analysis with User Intuition (Step-by-Step)

Win-loss analysis on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. The product is built so a Revenue Operations lead, a Product Marketing manager, or a CRO can launch a buyer-interview program in an afternoon, then watch transcripts, satisfaction-rated calls, and synthesized reports show up inside the Intelligence Hub within 24-48 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which means a continuous monthly program is financially viable even at $20M to $80M ARR. This tutorial walks through each of the eight steps with the verbatim labels you will see in the product, the choices that matter most for win-loss specifically, and the links to the underlying documentation.

For the broader methodology (how to design a program, what questions to ask, how to route insights), see the methodology guide. For the headline pricing, the proof points, and the comparison against Clozd and consultants, see the User Intuition win-loss analysis software page. This piece stays in the workflow lane: what to click, where to click it, and why each setting matters when the cohort you care about is closed-won and closed-lost deals.

What is win-loss analysis, briefly?


Win-loss analysis is the systematic practice of interviewing buyers, both those who chose you and those who chose a competitor, to understand the real reasons behind their decision. It sits between two failed substitutes: CRM loss-reason fields, which reps fill in at deal-close speed and almost always default to “price” or “budget,” and consultant-led studies, which take 4 to 8 weeks and cost $15K to $20K per cohort. Neither produces continuous intelligence. Both produce reports that get filed.

Done well, win-loss analysis answers four questions every revenue team needs answered every quarter. First, what tipped the decision in our favor on the deals we won? Second, what was the actual blocker on the deals we lost (not the surface-level reason)? Third, how is the buying committee describing our category and our competitors right now? Fourth, what changed since last quarter?

The mechanism that makes win-loss work is depth. A 30-minute conversation with a buying-committee member, with 5 to 7 levels of structured probing, surfaces decision drivers that a 2-minute post-deal survey will never reach. Implementation risk perception, champion confidence, time-to-value anxiety, narrative simplicity, vertical credibility: these are the patterns underneath the price excuse. They show up only in conversation.

Modern AI-moderated platforms compress the timeline from weeks to 24-48 hours and the cost from $15K+ per study to $200 floor pricing. That changes the program design. Instead of one big quarterly batch, mid-market teams can run 10 to 15 buyer interviews every month, feed every transcript into a searchable Intelligence Hub, and watch competitive intelligence compound study by study. The workflow below is built around that cadence.

Why doing this on User Intuition is different


Most win-loss tooling makes you choose between depth and speed. Surveys are fast and shallow. Consultants are deep and slow. User Intuition is built to deliver both, and the product flow is shaped by that goal.

The AI moderator runs 30-minute structured laddering conversations with each buyer and probes 5 to 7 levels deep on competitive comparisons, risk perception, and the moments that tipped the decision. Every conversation is run by the same methodology, so a study with 10 interviews and a study with 300 interviews are directly comparable. There is no rep bias, because there is no rep in the room. Buyers report higher candor with AI than with vendor-affiliated researchers, and the 98% participant satisfaction rate reflects how the experience lands.

Recruitment is built in. The 4M+ research panel covers 50+ languages and lets you target by demographics, geography, and segment without building a list. Closed-won and closed-lost deals flow in directly from Salesforce or HubSpot via the Contacts integration, or from a Stripe customer list, or from a unique study link you share with the buying-committee contact. Always-on collection is also a button: drop the embed widget into your post-deal email or your customer portal, and feedback flows continuously.

Synthesis runs in the same product. Once interviews complete, the Reports tab generates an Executive Summary, Top Insights, Detailed Analysis, and Recommendations, all grounded in verbatim buyer quotes. The Intelligence Hub layers across studies, so quarter one and quarter two patterns surface compounding insights that any single-study report would miss. Pricing at $20/interview makes the entire stack sustainable as a continuous program at mid-market scale, where consultant programs typically die after one or two cycles. See the win-loss analysis solution page for the full positioning.

The 8-step walkthrough


The product flow is the same for every study type. The choices below are tuned for win-loss specifically.

Step 1: Create Study

Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. Pick the Win/Loss Analysis card. The selected card highlights with a dark background. Click Save & Continue.

The win-loss template pre-loads the conversation flow, screener defaults, and learning goals tuned for closed-won and closed-lost interviews. You can still customize everything later, but starting from the right shape saves 10 minutes versus Custom Design. The Win/Loss Analysis template is the one you want for any cohort built around “Why did this buyer choose us, or not?”, which includes pure win-loss, competitive displacement studies, and renewal post-mortems.

Win-loss-specific tip: the same template handles wins and losses in one study. Mix both cohorts whenever possible, because the contrast between win narratives and loss narratives is where the highest-leverage intelligence lives. See Choose Your Study Type for the full template comparison.

Step 2: Customize Plan

The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.

For win-loss, give Charles three things in your opening message. First, the time window: “We are studying closed-won and closed-lost deals from the past quarter.” Second, the segments: “Mid-market B2B SaaS, ACV between $50K and $250K, North America.” Third, the competitors you want to learn about: “Primary competitors are {Competitor A} and {Competitor B}.” Charles will return a research plan that asks about evaluation triggers, decision criteria, the buying committee, and the moments that tipped the decision.
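If you run this every quarter, it helps to keep the three context elements in a reusable template so each study briefs Charles the same way. A minimal sketch; the helper function and its field names are illustrative, not part of the product:

```python
# Hypothetical helper: assemble the three-part opening brief for Charles.
# The structure (time window, segments, competitors) mirrors the guidance
# above; the function itself is illustrative, not a User Intuition API.

def build_charles_brief(window: str, segments: str, competitors: list[str]) -> str:
    """Combine the three context elements into one opening message."""
    named = " and ".join(competitors)
    return (
        f"We are studying closed-won and closed-lost deals from {window}. "
        f"Segments: {segments}. "
        f"Primary competitors are {named}."
    )

brief = build_charles_brief(
    window="the past quarter",
    segments="mid-market B2B SaaS, ACV between $50K and $250K, North America",
    competitors=["Competitor A", "Competitor B"],
)
print(brief)
```

Keeping the brief in code (or a shared doc) means the Q2 study is framed exactly like the Q1 study, which keeps the cross-quarter comparisons in the Intelligence Hub clean.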

Win-loss-specific tip: tell Charles which questions you already think you know the answer to. He uses your hypotheses to design the probes that test or break them, which keeps the interviews from drifting into ground you have already covered. Read the Customize Plan docs for the full set of context, scope, and objective questions Charles will ask.

Step 3: Review Goals

The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 Win-Loss, Mid-Market SaaS”; rename it to fit your internal naming convention). Below that, every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.

This is the step where most teams add the buying-committee question. The default win-loss plan asks about the participant’s role and influence, but for enterprise deals you want explicit coverage of the full committee: economic buyer, technical evaluator, end user, procurement. Add a sub-question under Key Sub-Questions like “Who else inside your organization weighed in on this decision, and what were their concerns?” That single question opens up an entire segment of insight that a single-respondent interview otherwise misses.

Win-loss-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Longer plans mean longer interviews mean lower completion rates. The Review Study docs have the full editing toolbar reference. Click Save & Continue when the plan reads the way you want.

Step 4: Setup Interview

The Setup Interview screen handles the participant-facing experience. Two voice cards sit at the top: Elliot (Male, American) and Clara (Female, American). Click the play button on each card to preview the voice, then click the card whose tone fits your buyer audience to select it. Below the voice cards, the Default Mode toggles control the format: Chat, Audio, or Video. Audio is the recommended default for win-loss because voice transcripts are richer, capturing tone, hesitation, and emotional emphasis around competitive comparisons that chat will not.

Below the format toggles is the screener questions section. Add the highest-leverage win-loss screener: “Were you involved in the {deal name} purchase decision?” Anyone who answers no drops out before consuming a credit. For multilingual studies, language and country targeting happens later in the Invite Participants step, not here.

Win-loss-specific tip: pick Elliot for buyer audiences that skew male and Clara for audiences that skew female. The match between moderator voice and participant demographic measurably improves rapport and depth. Read the Setup Interviewer docs for the full mode-selection guide.

Step 5: Test Conversation

The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.

For win-loss specifically, run two test conversations before launch. Run the first as a buyer who genuinely loved you and chose you, and listen for whether the AI probes hard enough on what tipped the decision (the AI should not just accept “the demo went well” as a final answer). Run the second as a skeptical lost buyer who chose a competitor on price, and listen for whether the AI probes past the price excuse into implementation risk and champion confidence.

Win-loss-specific tip: if the AI accepts a surface answer, go back to Step 3 and tighten the Interviewer Guidelines with explicit instructions like “Probe at least three levels deep when participants cite price.” Then re-test. The Test Conversation docs cover what to look for during testing.

Step 6: Launch

The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (the one with the rocket icon). Your study goes live immediately. You are redirected to the Study Dashboard.

The detail teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional, because most teams want to set up the study config first, review it with the team, and then trigger recruitment on a specific day.

Win-loss-specific tip: launch on a Wednesday morning, then push recruitment the same day. Buyer response rates skew highest mid-week and decay sharply across weekends, so a Wednesday morning launch gives you 2 to 3 strong response days before the weekend. The Launch docs cover everything that can and cannot be changed post-launch.

Step 7: Invite Participants

This step opens after launch. From the Study Dashboard, the Invites tab and the Actions dropdown give you four invitation methods that map directly to win-loss recruitment patterns.

Invite Your Customers lets you import contacts manually, via CSV, or directly from a synced CRM (Salesforce, HubSpot) using the Contacts tab and Segments tab. This is how most teams recruit closed-won and closed-lost deals: pull the past 90 days of closed deals from the CRM as a segment, then send.

Research Panel opens recruitment from the 4M+ panel across 50+ languages, with country and language selection happening here for multilingual studies. Use this when the buyer is not in your CRM (competitive displacement studies, deals you lost before they entered your pipeline).

Share Link generates a unique URL that you can paste into a personalized email to the buying-committee contact, into a Zapier workflow, or into Stripe receipt emails.

Embed Widget drops always-on feedback collection into your customer portal or post-deal email for continuous win-loss intelligence.

Win-loss-specific tip: combine methods. CRM segments for closed-won, panel recruitment for lost-to-competitor cohorts where you do not have buyer contact info. The Recruiting overview covers all four methods side by side.
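If you recruit from a CRM CSV export rather than the synced integration, the invite list is just a filter: closed-won and closed-lost deals inside the window. A sketch under stated assumptions; the column names ("Close Date", "Stage", "Contact Email") are guesses about your export format, not a real Salesforce or HubSpot schema, so adjust them to match your CRM's actual fields:

```python
# Sketch: build a closed-won / closed-lost invite list from a CRM CSV export.
# Column names ("Close Date", "Stage", "Contact Email") are assumptions about
# the export format; adjust to whatever your CRM actually emits.
import csv
from datetime import date, timedelta

def recent_closed_deals(path: str, days: int = 90) -> list[dict]:
    """Return email + stage for deals closed (won or lost) in the window."""
    cutoff = date.today() - timedelta(days=days)
    keep = {"Closed Won", "Closed Lost"}
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            closed = date.fromisoformat(row["Close Date"])
            if row["Stage"] in keep and closed >= cutoff:
                rows.append({"email": row["Contact Email"], "stage": row["Stage"]})
    return rows
```

The resulting list maps directly onto the manual or CSV import paths in Invite Your Customers.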

Step 8: Review Insights

Once interviews complete (the first ones inside hours, full cohorts within 24-48 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.

The third surface is the Intelligence Hub, accessed from the left navigation. Each Hub session is a workspace where you sync studies, upload supplementary files (PDFs, CSVs), and ask natural-language questions across everything in the room. For a continuous win-loss program, this is the compounding layer.

Win-loss-specific tip: create a single Intelligence Hub session called “Win-Loss” at the start of the year, sync every quarterly study into it, and keep adding sources. Then ask questions like “How has buyer language about our implementation risk evolved across the last three quarters?” The session-level pattern recognition is the part that single-study reports cannot replicate. The Intelligence Hub docs and the Reports docs cover the full output surface.

Mini case study


RudderStack ran the workflow above on a cohort of 22 buyers who had recently chosen a competitor over RudderStack on mid-market deals. The team set up the study in a single afternoon: Win/Loss Analysis template selected at Step 1, deal cohort and competitor list briefed to Charles at Step 2, learning goals tightened at Step 3 to include explicit buying-committee mapping for the larger deals, Audio mode and the buying-committee screener configured at Step 4, two test conversations run at Step 5, study launched mid-week at Step 6.

Recruitment used the CRM Contacts integration to import the closed-lost deal list from the past 90 days. All 22 interviews completed within 48 hours.

The Reports tab and the Intelligence Hub surfaced a finding that contradicted the entire sales narrative. Price came up as a primary driver in only 4 of 22 conversations. The dominant theme, present in 14 of 22 transcripts, was implementation risk perception: buyers did not believe RudderStack could onboard them fast enough relative to the competitor. The team rebuilt the sales motion around a 30-day implementation guarantee. Within two quarters, the win rate on mid-market deals improved by 23%.

“We were losing mid-market deals and blaming price. User Intuition interviewed 22 lost buyers in 48 hours. Price came up in only 4 conversations. The real issue was implementation risk perception, buyers didn’t believe we could onboard fast enough.”

Eric O., COO, RudderStack

What gets the best results from win-loss analysis on User Intuition?


Tips for getting the most out of the workflow, gathered from the patterns that consistently produce win-rate movement.

Keep the research plan tight. Aim for 4 to 6 main themes with 2 to 4 sub-questions each at Step 3. Plans with 8+ themes produce interviews longer than 25 minutes, and completion rates drop sharply. Twelve total questions or fewer keeps completion above 70% on closed-lost cohorts.
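The sizing rule above is just arithmetic, so it can be checked mechanically before launch. The thresholds come from this article; the function itself is an illustrative sketch, not a product feature:

```python
# Sanity-check research-plan size against the rule of thumb above:
# 4-6 themes, 2-4 sub-questions per theme, 12 total questions or fewer.

def plan_fits(themes: list[int], max_total: int = 12) -> bool:
    """themes: one entry per theme, giving its sub-question count."""
    theme_count_ok = 4 <= len(themes) <= 6
    per_theme_ok = all(2 <= q <= 4 for q in themes)
    total_ok = sum(themes) <= max_total
    return theme_count_ok and per_theme_ok and total_ok

print(plan_fits([3, 2, 3, 2]))     # 4 themes, 10 questions → True
print(plan_fits([4, 4, 4, 4, 4]))  # 5 themes, 20 questions → False
```

Running the check takes seconds and catches the 8-theme, 20-question plan before it costs you completion rate.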

Run two test conversations every time. One as the win narrative, one as the skeptical lost narrative. The single highest-leverage 20 minutes you can spend before launch is catching tone misalignment or shallow probing while it is still cheap to fix.

Pick voice over chat for win-loss. Voice transcripts surface tone, hesitation, and emotional emphasis around competitive comparisons that chat compresses out. Reserve chat for international cohorts where the participant’s language fluency makes typing a better experience.

Mix wins and losses in the same study. The contrast between why deals closed and why they didn’t is the most actionable comparison you can run. A study with 10 wins and 10 losses outperforms a study with 20 losses for almost every revenue use case.

Run continuous, not quarterly. 10 to 15 interviews per month at $20/interview on the Pro plan compounds into a much sharper picture than one 40-interview quarterly batch. The Intelligence Hub gets meaningfully more useful every month you feed it.
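The budgeting case behind the continuous cadence is back-of-envelope arithmetic. The figures are the ones this article cites ($20/interview, $15K per consultant study); the script just makes the comparison explicit:

```python
# Annual cost of the continuous cadence above versus quarterly consultant
# studies. Rates are the ones cited in this article; the math is the point.

PER_INTERVIEW = 20        # USD, Pro plan headline rate
MONTHLY_INTERVIEWS = 12   # within the 10-15/month range suggested above

annual_continuous = PER_INTERVIEW * MONTHLY_INTERVIEWS * 12
annual_consultant_low = 15_000 * 4  # four quarterly studies, low end

print(annual_continuous)      # → 2880
print(annual_consultant_low)  # → 60000
```

At those rates, a full year of monthly interviews costs less than a fifth of a single consultant study, which is why the quarterly-batch model rarely survives a budget review once the continuous option exists.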

Sync the Intelligence Hub once per month. Every completed study should land in the same Win-Loss session inside the Intelligence Hub on a regular cadence. Cross-quarter pattern recognition is the highest-leverage output User Intuition produces, and it only works if every study lands in the same session. See the cost breakdown for continuous programs and the comparison of dedicated win-loss platforms for the budgeting case behind the monthly cadence.

Route insights inside 7 days. Findings degrade fast. As soon as the report and the Intelligence Hub are populated, route specific findings to specific owners (sales enablement, product marketing, RevOps) with a defined SLA. This is the same pattern that connects directly to churn analysis programs, where the same compounding-intelligence logic applies. The studies that move win rate are the ones whose findings reach a decision-maker the same week.


Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

How long does setup take?

Setup takes 10 to 20 minutes from Create Study to Launch if you have your deal cohort and a few learning goals in mind. Charles, the AI researcher inside User Intuition, generates the objective, background, and learning goals from a short brief. Most teams spend more time recruiting buyers than configuring the study.

Do I need a dedicated researcher to run win-loss analysis?

No. Revenue Operations, Product Marketing, or a sales enablement lead can run a full study without a dedicated researcher. The AI does the moderation, the synthesis, and the report. Your job is to brief Charles on the deal cohort, edit the learning goals, and review the Intelligence Hub once interviews complete.

Can I pull participants directly from my CRM?

Yes. The Invite Participants step has a CRM contacts integration and a Segments tab that pulls participant lists directly from Salesforce, HubSpot, or your customer data platform. You can also paste emails manually, upload a CSV, or share a unique study link with specific buying-committee contacts.

Should interviews run over voice or chat?

Voice for almost every win-loss study. Voice transcripts are richer, capture tone and hesitation around competitive comparisons, and complete in 10 to 20 minutes without typing fatigue. Chat is the right call only for international buyers in non-primary languages or for accessibility-sensitive participants. Audio is the recommended default in User Intuition's setup screen.

How many interviews do I need?

Ten to fifteen interviews is the floor for directional patterns across won and lost deals. Twenty to thirty interviews is where competitor-by-competitor and segment-by-segment comparisons become defensible. The 4M+ research panel makes it practical to run continuous monthly studies of 10 to 15 interviews instead of one big quarterly batch.

Where do screener questions go?

Screener questions sit inside the Setup Interview step and run before the main interview begins. For win-loss, the highest-leverage screener is involvement in the buying decision. Add a question like 'Were you involved in the {deal name} purchase decision?' so participants without context drop out before consuming a credit.

Can I run win-loss interviews in multiple languages?

Yes. User Intuition supports 50+ languages, and the country and language selection happens at the Invite Participants step when you build a research panel. For a multi-region win-loss program, you can run parallel English, Spanish, French, German, and Japanese cohorts inside a single study and compare verbatim themes side by side in the Intelligence Hub.

How much does it cost?

On the Professional plan, audio interviews run at $20/interview, which is the marketing headline rate. Studies start at $200, with no monthly minimums and no contract. Twenty interviews per month for a continuous win-loss program runs roughly $400, well below a single consultant interview at $1,500 to $2,000.

How does this compare to hiring a consultant?

Consultants take 4 to 8 weeks per study at $15K to $20K. The User Intuition workflow lands buyer interviews and a synthesized report in 24-48 hours per cohort. Charles handles methodology, the panel handles recruitment, and the Intelligence Hub handles compounding pattern recognition across studies, so you avoid starting over every quarter.

Where do the insights live?

Three places inside User Intuition. The Calls tab holds every transcript and audio recording with quality ratings. The Reports tab holds the AI-synthesized Executive Summary, Top Insights, Detailed Analysis, and Recommendations. The Intelligence Hub aggregates studies over time so you can ask cross-study questions like 'How have buyers described our implementation risk across the last three quarters?'
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve

3 interviews free. No credit card required.

See it First

Explore a real study output — no sales call needed.

No contract · No retainers · Results in 72 hours