
Competitive Intelligence Template: Build a CI Program in 30 Days

By Kevin, Founder & CEO

Most competitive intelligence templates are spreadsheets for tracking competitor websites. Feature comparison matrices. SWOT analysis grids. Pricing trackers. These are monitoring tools, and monitoring is not intelligence.

Monitoring tells you what competitors are doing. Intelligence tells you why buyers choose them. The gap between those two activities is the difference between a CI program that produces slide decks and one that changes competitive outcomes.

The templates in this guide are built for the second kind of program. They cover the five components you actually need: a competitive interview discussion guide for structured buyer conversations, a battlecard template populated with buyer evidence instead of marketing assumptions, a quarterly perception tracking framework for measuring how competitive dynamics shift over time, an intelligence distribution matrix that routes the right insights to the right teams, and a 30-day launch timeline for building the entire system from scratch.

If you are looking for a broader strategic overview of competitive intelligence methodology, start with our complete guide to competitive intelligence. If you need to refine the specific questions you ask buyers, see our deep dive on competitive intelligence questions that actually work. This post is about the operational templates — the scaffolding that turns good questions and sound methodology into a repeatable program.

Template 1: Competitive Interview Discussion Guide


The discussion guide is the most important template in your CI program. Everything downstream — battlecards, perception tracking, executive briefings — depends on the quality of the buyer conversations that feed them. A weak discussion guide produces surface-level intelligence. A strong one reaches the emotional and structural drivers that actually determine competitive outcomes.

Guide Structure

A competitive interview discussion guide should move through five phases, each building on the previous one. The goal is to reconstruct the buyer’s decision experience rather than extract opinions about competitors.

Phase 1: Evaluation Context (5 minutes)

These opening questions establish the timeline, trigger event, and consideration set without leading the buyer toward any particular competitor.

| Question | Purpose |
| --- | --- |
| Walk me through what was happening when you first started looking for a solution. | Surfaces the trigger event and urgency level |
| What did your evaluation process look like from beginning to end? | Establishes timeline and decision structure |
| Who else was involved in the decision internally? | Maps the stakeholder landscape |
| Which solutions made your initial shortlist, and how did that list form? | Identifies consideration set and discovery channels |

Phase 2: Competitor-Specific Probes (10 minutes)

Once the buyer has established context in their own words, probe their perceptions of specific competitors. The key is to ask about experience, not opinion.

| Question | Purpose |
| --- | --- |
| What was your impression of [Competitor X] before you started evaluating them? | Baseline perception and brand awareness |
| Walk me through your experience evaluating [Competitor X]. | Surfaces demo quality, sales process, and friction points |
| What did [Competitor X] do well during the process? | Identifies competitive strengths from buyer perspective |
| Where did [Competitor X] fall short of what you needed? | Surfaces gaps and weaknesses as experienced, not assumed |
| How did [Competitor X] compare to your expectations going in? | Reveals perception-reality gaps |

Phase 3: Switching Triggers (5 minutes)

These questions identify the specific moments where the buyer’s preference shifted. Switching triggers are the most actionable form of competitive intelligence because they reveal the exact conditions under which buyers move toward or away from specific solutions.

| Question | Purpose |
| --- | --- |
| Was there a specific moment when you started leaning toward one solution over the others? | Identifies the tipping point |
| What would have needed to be different for you to choose [Competitor X] instead? | Surfaces the decisive gap |
| If you had to describe the one thing that tipped the decision, what would it be? | Forces prioritization among competing factors |

Phase 4: Emotional and Structural Probes (7 minutes)

This is where most competitive interview guides stop, and where the most valuable intelligence lives. These questions reach below the rational justification layer to the emotional and organizational factors that often drive competitive outcomes.

| Question | Purpose |
| --- | --- |
| At any point during the evaluation, did you have concerns about risk? What did those look like? | Surfaces trust and confidence dynamics |
| How did different stakeholders in your organization react to the options? Was there alignment? | Reveals internal politics and champion dynamics |
| Was there anything about the evaluation that surprised you — something you didn’t expect going in? | Opens space for unscripted disclosure |
| Looking back, is there anything about the process you would do differently? | Retrospective lens often produces the most honest answers |

Phase 5: Future-Looking Close (3 minutes)

| Question | Purpose |
| --- | --- |
| Six months from now, what would make you reconsider this decision? | Surfaces retention risks and competitive re-entry points |
| Is there anything we haven’t covered that influenced your decision? | Catches intelligence the guide missed |

How AI Handles This Automatically

The discussion guide above is a starting point for human-moderated interviews. The limitation of any static guide is that it cannot adapt to what the buyer actually says. When a buyer mentions a detail that warrants deeper probing, a human interviewer has to make a judgment call about whether to follow that thread or stick to the guide.

AI-moderated interviews solve this by design. The AI interviewer uses structured laddering methodology to follow each response through 5-7 levels of depth, pursuing the emotional and structural layers automatically. When a buyer says something unexpected — a competitor mentioned that was not on the shortlist, an internal dynamic that shaped the outcome — the AI follows up without waiting for a revised guide.

This produces consistent depth across every conversation. A study of 200 competitive interviews does not have 200 different quality levels depending on interviewer fatigue or skill — it has 200 conversations that all reached the same structural depth. That consistency is what makes pattern identification possible at scale.

Template 2: Buyer-Data Battlecard


Most battlecard templates are populated by the product marketing team using competitor websites, pricing pages, and sales team anecdotes. The result is a document that reflects what competitors say about themselves, filtered through what your sales team remembers from deals. That is not a battlecard — it is a guess.

A buyer-data battlecard is populated from actual buyer conversations. Every section below should be filled with evidence from competitive interviews, not internal assumptions.

Battlecard Sections

Section 1: Competitor Overview (From Buyer Perspective)

This is not a summary of the competitor’s website. It is a summary of how buyers actually perceive and describe this competitor.

| Field | Source |
| --- | --- |
| How buyers describe them | Direct quotes from interviews — the words buyers use unprompted |
| Common first impression | What buyers expect before evaluation starts |
| Perception-reality gap | Where the actual experience differs from the brand perception |
| Typical buyer profile | What kind of buyer gravitates toward this competitor and why |

Section 2: Common Buyer Perceptions

Use actual quotes from buyer interviews. Not paraphrases. Not your team’s interpretation. The exact words buyers used.

| Perception Area | Buyer Quote | Frequency | Source |
| --- | --- | --- | --- |
| Product quality | [Direct quote] | X of Y interviews | Study ID / Date |
| Ease of use | [Direct quote] | X of Y interviews | Study ID / Date |
| Pricing / value | [Direct quote] | X of Y interviews | Study ID / Date |
| Support experience | [Direct quote] | X of Y interviews | Study ID / Date |
| Sales process | [Direct quote] | X of Y interviews | Study ID / Date |

Section 3: Switching Triggers

What makes buyers consider alternatives to this competitor? These are the conditions under which this competitor’s customers become available.

| Trigger | Evidence | Actionable For |
| --- | --- | --- |
| [Specific trigger] | [Quote or pattern] | Sales / Marketing / Product |

Section 4: Winning Arguments

What actually resonated in deals you won against this competitor? Not what you think should resonate — what buyers told you mattered.

| Argument | Win Rate Impact | Supporting Evidence |
| --- | --- | --- |
| [Specific message or proof point] | [If measurable] | [Buyer quotes from won deals] |

Section 5: Losing Patterns

Where do you consistently fall short against this competitor? This is the most uncomfortable section and the most valuable one.

| Pattern | Frequency | Root Cause | Status |
| --- | --- | --- | --- |
| [Specific loss pattern] | X of Y losses | [Product gap / Positioning gap / Process gap] | [Being addressed / Accepted trade-off / Unknown] |

Section 6: Response Framework

For each common objection or competitive positioning the buyer raises, provide an evidence-backed response.

| When the buyer says… | Respond with… | Evidence |
| --- | --- | --- |
| [Common objection] | [Evidence-backed counter] | [Source: buyer quote, data point, or case study] |

Populating the Battlecard

A single competitive interview study of 30-50 buyers who evaluated your solution against a specific competitor will populate every section of this template. The key is structuring the study to capture the right data:

  • Use the discussion guide from Template 1 with competitor-specific probes targeted at the competitor this battlecard covers
  • Code responses by section so the analysis maps directly to the battlecard structure
  • Update quarterly as new buyer data comes in — a battlecard built on six-month-old interviews is decaying

For teams running continuous competitive research, the User Intuition Customer Intelligence Hub maintains a searchable database of every buyer conversation, making battlecard updates a query rather than a new research project.

Template 3: Quarterly Perception Tracking Framework


A single competitive study gives you a snapshot. Quarterly tracking gives you a trend line. The perception tracking framework runs the same study structure each quarter against the same competitor set, measuring how buyer perceptions shift over time.

This is where competitive intelligence stops being reactive and starts being predictive. When you see a competitor’s trust score climbing for three consecutive quarters, you can act before the pipeline impact is visible in your CRM.

Attributes to Track

Track buyer perceptions across five core dimensions. These should be measured through open-ended interview responses coded to a consistent framework, not through survey scales — open-ended responses surface the reasoning behind perception shifts, not just the direction.

| Dimension | What It Measures | Example Interview Question |
| --- | --- | --- |
| Positioning clarity | How well buyers understand what the competitor does and for whom | “How would you describe what [Competitor X] does to a colleague?” |
| Trust / credibility | Confidence in the competitor’s ability to deliver | “How confident were you that [Competitor X] could deliver what they promised?” |
| Value perception | Whether the perceived value justifies the cost | “Did the investment feel proportional to what you expected to get?” |
| Ease of evaluation | Friction in the buying process itself | “How easy or difficult was it to evaluate [Competitor X]?” |
| Support / partnership | Perception of ongoing relationship quality | “What did you expect the post-purchase relationship to look like?” |

Quarter-Over-Quarter Comparison Format

| Dimension | Q1 Score | Q2 Score | Delta | Trend | Notable Shift |
| --- | --- | --- | --- | --- | --- |
| Positioning clarity | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Trust / credibility | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Value perception | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Ease of evaluation | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Support / partnership | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |

Trend Identification Methodology

Not every quarter-over-quarter change is a trend. Use these thresholds:

  • Noise: Less than 5% change in a single quarter with no supporting qualitative shift. Do not act.
  • Signal: 5-15% change in a single quarter, or less than 5% change for two consecutive quarters in the same direction. Investigate.
  • Trend: More than 15% change in a single quarter, or 5%+ change for two consecutive quarters in the same direction. Act.
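The three thresholds above translate directly into a small decision rule. A minimal Python sketch — the function name and the percent-change inputs are illustrative, not part of any particular tool:

```python
def classify_shift(delta_pct, prior_delta_pct=None):
    """Classify a quarter-over-quarter perception change as noise,
    signal, or trend, using the thresholds described above.

    delta_pct: this quarter's change in percent (signed, e.g. -12)
    prior_delta_pct: last quarter's change, or None if unavailable
    """
    same_direction = (
        prior_delta_pct is not None
        and (delta_pct > 0) == (prior_delta_pct > 0)
    )
    magnitude = abs(delta_pct)
    prior_magnitude = abs(prior_delta_pct) if prior_delta_pct is not None else 0

    # Trend: >15% in one quarter, or 5%+ for two consecutive quarters
    # in the same direction. Act.
    if magnitude > 15 or (same_direction and magnitude >= 5 and prior_magnitude >= 5):
        return "trend"
    # Signal: 5-15% in one quarter, or <5% twice in the same direction.
    # Investigate.
    if 5 <= magnitude <= 15 or (same_direction and magnitude < 5):
        return "signal"
    # Noise: <5% with no supporting pattern. Do not act.
    return "noise"
```

A rule like this is most useful as a triage step: it keeps a team from chasing single-quarter blips while guaranteeing that sustained directional movement gets escalated.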

Alert Triggers

Define the conditions that escalate from routine tracking to active response:

| Trigger | Threshold | Action |
| --- | --- | --- |
| Competitor trust score rising | Two consecutive quarters of increase | Brief product and sales leadership |
| Your positioning clarity dropping | Single quarter drop exceeding 10% | Messaging review with marketing |
| Value perception diverging from competitor | Gap widening for two quarters | Pricing and packaging review |
| New competitor appearing in consideration sets | Mentioned by 10%+ of buyers | Add to tracking framework |
| Competitor not appearing in consideration sets | Dropped below 5% mention rate | Consider removing from active tracking |
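Once quarterly scores and mention rates live in a structured store, most of these triggers can be evaluated automatically. A hedged sketch, assuming a simple dict-based history and an illustrative 0-100 score scale; the field names are assumptions, and distinguishing genuinely new competitors from already-tracked ones would rely on your own tracking data:

```python
def check_alerts(history, competitor_mention_rates):
    """Evaluate alert-trigger thresholds against tracking data.

    history: dict of metric -> list of quarterly scores, oldest first,
             e.g. {"competitor_trust": [60, 64, 69]} (illustrative scale)
    competitor_mention_rates: dict of competitor -> share of buyers who
             mentioned them this quarter (0.0-1.0)
    Returns a list of (trigger, action) tuples.
    """
    alerts = []

    # Competitor trust rising for two consecutive quarters
    trust = history.get("competitor_trust", [])
    if len(trust) >= 3 and trust[-1] > trust[-2] > trust[-3]:
        alerts.append(("competitor trust rising", "Brief product and sales leadership"))

    # Your positioning clarity dropping more than 10% in a single quarter
    clarity = history.get("positioning_clarity", [])
    if len(clarity) >= 2 and clarity[-2] > 0:
        drop_pct = (clarity[-2] - clarity[-1]) / clarity[-2] * 100
        if drop_pct > 10:
            alerts.append(("positioning clarity dropping", "Messaging review with marketing"))

    # Consideration-set entry and exit thresholds
    for name, rate in competitor_mention_rates.items():
        if rate >= 0.10:
            alerts.append((f"{name} in consideration sets", "Add to tracking framework"))
        elif rate < 0.05:
            alerts.append((f"{name} below mention floor", "Consider removing from active tracking"))
    return alerts
```

The output can be routed straight to the Slack or email channel defined in the distribution matrix, so escalation does not depend on someone remembering to reread the quarterly report.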

Running Quarterly Studies

Each quarterly tracking study should interview 50-100 recent evaluators (buyers who made a decision in the past 90 days) using a consistent discussion guide. The same questions each quarter enable valid comparison. New questions can be added to capture emerging themes, but core tracking questions should remain stable.

The cost of competitive intelligence research has dropped substantially with AI-moderated approaches — running a quarterly tracking study of 100 interviews is operationally feasible even for teams without dedicated research headcount.

Template 4: Intelligence Distribution Matrix


The best competitive intelligence is useless if it reaches the wrong people, in the wrong format, at the wrong time. Most CI programs fail at distribution, not collection. The intelligence distribution matrix defines who gets what, in what format, and on what cadence.

Distribution by Team

Sales

| Intelligence Type | Format | Cadence | Owner |
| --- | --- | --- | --- |
| Battlecards | One-page reference per competitor | Updated quarterly | Product Marketing |
| Objection library | Searchable database of evidence-backed responses | Updated as new data arrives | Sales Enablement |
| Competitive alerts | Slack/email notification when significant shift detected | Real-time | CI Program Owner |
| Deal-specific intelligence | Custom briefing for high-value competitive deals | On request | CI Program Owner |

Product

| Intelligence Type | Format | Cadence | Owner |
| --- | --- | --- | --- |
| Feature gap analysis | Ranked list of gaps cited in loss interviews, with frequency and revenue impact | Quarterly | Product Marketing |
| Switching trigger data | Summary of conditions that cause competitor customers to evaluate alternatives | Quarterly | CI Program Owner |
| Perception data | How buyers perceive product capabilities vs. competitors | Quarterly | CI Program Owner |
| Buyer workflow insights | How buyers actually use competitive products in practice | As available | Product Marketing |

Marketing

| Intelligence Type | Format | Cadence | Owner |
| --- | --- | --- | --- |
| Messaging gaps | Where your messaging does not match buyer language or priorities | Quarterly | CI Program Owner |
| Positioning intelligence | How buyers position you vs. competitors in their own words | Quarterly | CI Program Owner |
| Content opportunities | Competitive topics where buyers seek information and find gaps | Quarterly | Content Lead |
| Competitive narrative shifts | Changes in how competitors are positioning themselves | Monthly | CI Program Owner |

Executive

| Intelligence Type | Format | Cadence | Owner |
| --- | --- | --- | --- |
| Strategic competitive briefing | 2-3 page summary of competitive landscape changes with strategic implications | Quarterly | CI Program Owner |
| Competitive risk assessment | Emerging threats with potential revenue impact | Quarterly or as needed | CI Program Owner |
| Market perception trends | Longitudinal view of how market perceives you vs. key competitors | Semi-annually | CI Program Owner |

Distribution Rules

  1. Route insights to action owners, not information consumers. The product team gets feature gap data because they can act on it. They do not need the full quarterly report.
  2. Include buyer language in every deliverable. Stakeholders discount abstract findings. Direct quotes create urgency and specificity.
  3. Attach action items to every distribution. Intelligence without a recommended action is just noise. Every deliverable should answer: “What should the recipient do differently based on this?”
  4. Set SLAs for action. The distribution matrix should include expected response timelines. A battlecard update should be acknowledged within one week. A strategic risk assessment should generate an action plan within two weeks.
  5. Track action rates. If a team consistently receives intelligence and takes no action, either the intelligence is not relevant to them or the format is wrong. Both are fixable problems.

The 30-Day CI Program Launch Timeline


Building a competitive intelligence program from scratch does not require six months and a dedicated team. It requires focus, the right templates, and a willingness to start with imperfect data and iterate.

Week 1: Foundation

Goal: Define scope, set up infrastructure, align stakeholders.

| Task | Detail | Output |
| --- | --- | --- |
| Define top 3 competitors | Based on pipeline data — which competitors appear most often in deals | Prioritized competitor list |
| Draft key competitive questions | What do you need to know that you do not currently know? | 5-10 specific questions per competitor |
| Set up competitive monitoring | Google Alerts, competitor blog/changelog subscriptions, G2/Gartner tracking | Monitoring dashboard or channel |
| Identify program owner | Single person accountable for CI program output | Named owner with executive sponsor |
| Map intelligence consumers | Who needs what — use the distribution matrix template | Draft distribution matrix |
| Design first interview study | Use the discussion guide template, targeting recent evaluators | Finalized discussion guide |

Week 2: First Study

Goal: Conduct your first round of competitive buyer interviews and establish a monitoring baseline.

| Task | Detail | Output |
| --- | --- | --- |
| Recruit interview participants | Recent evaluators who considered at least one of your top 3 competitors | 30-50 confirmed participants |
| Launch interview study | AI-moderated or human-moderated, using the discussion guide from Week 1 | Completed interviews within 48-72 hours (AI) or 2-3 weeks (human) |
| Baseline competitor monitoring | Document current competitor positioning, pricing, messaging | Competitor snapshot document |
| Set up analysis coding framework | Define categories that map to battlecard sections | Coding framework document |

With AI-moderated research, the interview phase compresses from weeks to days. A study of 50 buyer interviews can complete in 48-72 hours, giving you raw material for analysis by mid-week.

Week 3: Analysis and Deliverables

Goal: Turn raw interview data into initial battlecards and a competitive report.

| Task | Detail | Output |
| --- | --- | --- |
| Code and analyze interview data | Apply the coding framework to all completed interviews | Coded dataset with themes |
| Build initial battlecards | One per top competitor using the buyer-data battlecard template | 3 battlecards (v1) |
| Draft competitive intelligence report | Key findings, buyer evidence, recommended actions per team | CI report (v1) |
| Identify perception baseline | Score each competitor on the five tracking dimensions | Q1 perception baseline |
| Flag surprises | Findings that contradict internal assumptions | Surprise findings brief |

Week 4: Distribution and Governance

Goal: Distribute first findings and establish the ongoing program cadence.

| Task | Detail | Output |
| --- | --- | --- |
| Sales briefing | Present battlecards, walk through objection responses, gather feedback | Sales-validated battlecards |
| Product briefing | Present feature gap data and switching triggers | Product team action items |
| Marketing briefing | Present messaging gaps and positioning intelligence | Marketing team action items |
| Executive briefing | Present strategic competitive landscape summary | Executive-level competitive brief |
| Establish quarterly cadence | Schedule Q2 tracking study, set recurring briefing dates | Program calendar |
| Set up governance | Define update triggers, escalation paths, action SLAs | CI program governance doc |

By the end of Week 4, you have a functioning competitive intelligence program with live battlecards, a perception baseline for tracking, and a distribution system that routes intelligence to action. It will not be perfect. The first set of battlecards will have gaps. The discussion guide will need refinement based on what you learned. The distribution matrix will need adjustment based on which teams actually used the intelligence.

That is expected. A CI program is a system that improves with every cycle. The goal of the first 30 days is to get the system running — not to produce a definitive competitive analysis.

How Do You Measure CI Program Success?


A competitive intelligence program without measurement is a content production exercise. These five metrics tell you whether your CI program is actually changing competitive outcomes.

Battlecard usage rate. What percentage of competitive deals involve a rep accessing the battlecard? Track this through CRM integration or enablement platform analytics. If you have battlecards and reps are not using them, the problem is either discoverability (they cannot find them), relevance (the content does not match what they face in deals), or trust (the evidence feels stale or internally generated). Target: 70%+ of competitive deals.

Win rate impact. Compare win rates in competitive deals where the battlecard was used versus where it was not. This is the most direct measure of CI program value. A well-populated battlecard built on buyer evidence should produce a measurable win rate lift within two quarters. Track this by competitor — some battlecards will outperform others, revealing where your evidence is strongest and where it needs reinforcement.
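If competitive deals are tagged in CRM, this comparison can be computed per competitor from a simple export. A sketch under assumed field names ('competitor', 'won', 'battlecard_used'); your CRM schema will differ, and a real analysis should also account for sample size before trusting small lifts:

```python
from collections import defaultdict

def win_rate_lift(deals):
    """Per-competitor win rate with vs. without battlecard usage.

    deals: list of dicts with illustrative keys 'competitor',
    'won' (bool), 'battlecard_used' (bool).
    """
    # used-flag -> [wins, total] per competitor
    tallies = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
    for deal in deals:
        bucket = tallies[deal["competitor"]][deal["battlecard_used"]]
        bucket[0] += int(deal["won"])
        bucket[1] += 1

    def rate(wins, total):
        return wins / total if total else None

    report = {}
    for competitor, buckets in tallies.items():
        used = rate(*buckets[True])
        unused = rate(*buckets[False])
        lift = used - unused if used is not None and unused is not None else None
        report[competitor] = {"used": used, "unused": unused, "lift": lift}
    return report
```

Running this by competitor is what surfaces which battlecards are actually earning their keep and which need fresher evidence.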

Competitive deal flag rate. What percentage of pipeline deals are tagged as competitive, and against which competitors? If reps are not flagging competitive deals in CRM, you have a data quality problem that undermines every other metric. Low flag rates also indicate that the CI program has not made it easy enough for reps to identify and tag competitive situations.

Intelligence freshness score. How old is the buyer evidence in each battlecard? Set a maximum age — 90 days is a good threshold for fast-moving markets, 180 days for slower categories. Battlecards built on stale evidence erode trust. Track the date of the most recent buyer interview feeding each battlecard and flag any that exceed the freshness threshold.
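The freshness check itself is a date comparison once each battlecard records the date of its newest feeding interview. A minimal sketch, assuming you track that date per competitor (the data shape is illustrative):

```python
from datetime import date

def freshness_flags(battlecards, today, max_age_days=90):
    """Flag battlecards whose newest buyer interview exceeds the
    freshness threshold (90 days for fast-moving markets, per the text;
    use 180 for slower categories).

    battlecards: dict of competitor -> date of newest interview feeding it
    Returns dict of competitor -> age in days, for stale cards only.
    """
    stale = {}
    for competitor, last_interview in battlecards.items():
        age_days = (today - last_interview).days
        if age_days > max_age_days:
            stale[competitor] = age_days
    return stale
```

Wiring this into a weekly scheduled job turns freshness from a manual audit into an automatic flag in the CI owner's queue.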

Stakeholder adoption. Beyond sales, which teams are consuming and acting on competitive intelligence? Track the distribution matrix against actual engagement — are product teams incorporating feature gap data into roadmap decisions? Is marketing adjusting messaging based on positioning intelligence? Measure not just whether intelligence is distributed, but whether it produces documented action within the expected SLA.

From Templates to Compounding Intelligence


Templates give you structure. What they cannot give you is scale or memory.

A competitive intelligence program that runs quarterly studies, updates battlecards with buyer evidence, tracks perception shifts over time, and distributes intelligence to the right teams will outperform any competitor monitoring tool. But the operational burden of running that program — recruiting participants, conducting interviews, coding responses, maintaining a searchable intelligence database — is what causes most CI programs to decay after the first quarter.

This is the problem User Intuition was built to solve. AI-moderated competitive interviews complete at scale in 48-72 hours. Every conversation feeds a searchable Customer Intelligence Hub where competitive insights compound over time. Quarterly perception tracking becomes a recurring study rather than a recurring project. Battlecard updates become queries against a growing database of buyer evidence rather than new research efforts.

The templates in this guide work regardless of the tools you use. But if you want to run them at the cadence and scale that produces real competitive advantage, see how User Intuition handles competitive intelligence or start a free trial to run your first competitive study this week.

For related frameworks, see our win-loss analysis template — win-loss and competitive intelligence share methodology, and the best programs run both from the same buyer data.

Frequently Asked Questions

What should a competitive intelligence template include?
A complete CI template should include five components: (1) a competitive interview discussion guide for buyer conversations, (2) a battlecard template populated with buyer language rather than marketing assumptions, (3) a quarterly tracking framework for measuring perception shifts, (4) an intelligence distribution matrix showing which teams get which insights, and (5) a program governance template for cadence, responsibilities, and escalation.

How do you write a competitive intelligence report?
Start with the competitive question you're answering (e.g., 'Why are we losing to Competitor X in the mid-market?'), present the buyer evidence (quotes, patterns, perception data), map the findings to action items per team (product, sales, marketing, executive), and include a comparison to previous quarters if you have longitudinal data. The best CI reports are structured for action, not just information.

What is a competitive intelligence framework?
A CI framework is the structured approach your organization uses to collect, analyze, distribute, and act on competitive insights. Common frameworks include the Intelligence Cycle (plan, collect, analyze, disseminate), Porter's Five Forces for strategic CI, and buyer-centric frameworks that organize intelligence around how customers perceive and choose between competitors.

How do you build a CI program in 30 days?
Week 1: Define your top 3 competitors and key competitive questions. Week 2: Set up monitoring (automated alerts) and run your first buyer interview study. Week 3: Analyze results and create initial battlecards. Week 4: Distribute findings to sales, product, and marketing. Then establish a quarterly cadence for buyer perception tracking.

What is the difference between a battlecard and a competitive intelligence report?
A battlecard is a tactical sales tool — a one-page reference for handling specific competitor objections in live deals. A competitive intelligence report is a strategic document covering broader competitive dynamics, perception trends, and cross-team recommendations. Battlecards are derived from CI reports but optimized for real-time sales use.

How often should you update CI templates and battlecards?
Battlecards should be updated quarterly (aligned with buyer perception tracking). The CI program framework should be reviewed semi-annually. Interview discussion guides should evolve based on what you learn — add follow-up probes for emerging themes and retire questions that consistently produce low-value answers.

How do you get sales to adopt a CI program?
Start by solving a problem they already have. Ask your top reps which competitor they lose to most often and why — then run a focused buyer interview study on that competitor and deliver a battlecard within two weeks. When sales sees evidence-backed talk tracks that help them win deals they were losing, adoption follows. The worst approach is launching a CI program with a company-wide announcement and a Confluence page.

What should a competitive intelligence briefing cover?
A CI briefing should cover five elements: (1) what changed since the last briefing — new competitor moves, positioning shifts, or product launches, (2) buyer perception data — how customers actually describe each competitor, (3) win/loss patterns — where you are gaining or losing ground and why, (4) recommended actions — specific changes for sales, product, or marketing based on the evidence, and (5) what to watch — emerging signals that are not yet trends but warrant attention.

How do you measure whether a CI program is working?
Track four metrics: (1) competitive win rate — are you winning more deals against tracked competitors over time, (2) battlecard usage rate — what percentage of competitive deals reference the battlecard, (3) time-to-intelligence — how quickly new competitive insights reach the teams that need them, and (4) action rate — what percentage of CI recommendations result in measurable changes to sales messaging, product roadmap, or marketing positioning.

What tools do CI templates integrate with?
CI templates are tool-agnostic by design — the structure matters more than the software. That said, common integrations include CRM systems (Salesforce, HubSpot) for tagging competitive deals and tracking win rates, communication tools (Slack, Teams) for distributing alerts, content management platforms for hosting battlecards, and AI-moderated interview platforms like User Intuition for generating the buyer evidence that populates the templates.
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve

3 interviews free. No credit card required.

Enterprise

See a real study built live in 30 minutes.

No contract · No retainers · Results in 72 hours