Most competitive intelligence templates are spreadsheets for tracking competitor websites. Feature comparison matrices. SWOT analysis grids. Pricing trackers. These are monitoring tools, and monitoring is not intelligence.
Monitoring tells you what competitors are doing. Intelligence tells you why buyers choose them. The gap between those two activities is the difference between a CI program that produces slide decks and one that changes competitive outcomes.
The templates in this guide are built for the second kind of program. They cover the five components you actually need:
- A competitive interview discussion guide for structured buyer conversations
- A battlecard template populated with buyer evidence instead of marketing assumptions
- A quarterly perception tracking framework for measuring how competitive dynamics shift over time
- An intelligence distribution matrix that routes the right insights to the right teams
- A 30-day launch timeline for building the entire system from scratch
If you are looking for a broader strategic overview of competitive intelligence methodology, start with our complete guide to competitive intelligence. If you need to refine the specific questions you ask buyers, see our deep dive on competitive intelligence questions that actually work. This post is about the operational templates — the scaffolding that turns good questions and sound methodology into a repeatable program.
Template 1: Competitive Interview Discussion Guide
The discussion guide is the most important template in your CI program. Everything downstream — battlecards, perception tracking, executive briefings — depends on the quality of the buyer conversations that feed them. A weak discussion guide produces surface-level intelligence. A strong one reaches the emotional and structural drivers that actually determine competitive outcomes.
Guide Structure
A competitive interview discussion guide should move through five phases, each building on the previous one. The goal is to reconstruct the buyer’s decision experience rather than extract opinions about competitors.
Phase 1: Evaluation Context (5 minutes)
These opening questions establish the timeline, trigger event, and consideration set without leading the buyer toward any particular competitor.
| Question | Purpose |
|---|---|
| Walk me through what was happening when you first started looking for a solution. | Surfaces the trigger event and urgency level |
| What did your evaluation process look like from beginning to end? | Establishes timeline and decision structure |
| Who else was involved in the decision internally? | Maps the stakeholder landscape |
| Which solutions made your initial shortlist, and how did that list form? | Identifies consideration set and discovery channels |
Phase 2: Competitor-Specific Probes (10 minutes)
Once the buyer has established context in their own words, probe their perceptions of specific competitors. The key is to ask about experience, not opinion.
| Question | Purpose |
|---|---|
| What was your impression of [Competitor X] before you started evaluating them? | Baseline perception and brand awareness |
| Walk me through your experience evaluating [Competitor X]. | Surfaces demo quality, sales process, and friction points |
| What did [Competitor X] do well during the process? | Identifies competitive strengths from buyer perspective |
| Where did [Competitor X] fall short of what you needed? | Surfaces gaps and weaknesses as experienced, not assumed |
| How did [Competitor X] compare to your expectations going in? | Reveals perception-reality gaps |
Phase 3: Switching Triggers (5 minutes)
These questions identify the specific moments where the buyer’s preference shifted. Switching triggers are the most actionable form of competitive intelligence because they reveal the exact conditions under which buyers move toward or away from specific solutions.
| Question | Purpose |
|---|---|
| Was there a specific moment when you started leaning toward one solution over the others? | Identifies the tipping point |
| What would have needed to be different for you to choose [Competitor X] instead? | Surfaces the decisive gap |
| If you had to describe the one thing that tipped the decision, what would it be? | Forces prioritization among competing factors |
Phase 4: Emotional and Structural Probes (7 minutes)
This is where most competitive interview guides stop, and where the most valuable intelligence lives. These questions reach below the rational justification layer to the emotional and organizational factors that often drive competitive outcomes.
| Question | Purpose |
|---|---|
| At any point during the evaluation, did you have concerns about risk? What did those look like? | Surfaces trust and confidence dynamics |
| How did different stakeholders in your organization react to the options? Was there alignment? | Reveals internal politics and champion dynamics |
| Was there anything about the evaluation that surprised you — something you didn’t expect going in? | Opens space for unscripted disclosure |
| Looking back, is there anything about the process you would do differently? | Retrospective lens often produces the most honest answers |
Phase 5: Future-Looking Close (3 minutes)
These closing questions surface retention risk and catch anything the rest of the guide missed.
| Question | Purpose |
|---|---|
| Six months from now, what would make you reconsider this decision? | Surfaces retention risks and competitive re-entry points |
| Is there anything we haven’t covered that influenced your decision? | Catches intelligence the guide missed |
How AI Handles This Automatically
The discussion guide above is a starting point for human-moderated interviews. The limitation of any static guide is that it cannot adapt to what the buyer actually says. When a buyer mentions a detail that warrants deeper probing, a human interviewer has to make a judgment call about whether to follow that thread or stick to the guide.
AI-moderated interviews solve this by design. The AI interviewer uses structured laddering methodology to follow each response through 5-7 levels of depth, pursuing the emotional and structural layers automatically. When a buyer says something unexpected — a competitor mentioned that was not on the shortlist, an internal dynamic that shaped the outcome — the AI follows up without waiting for a revised guide.
This produces consistent depth across every conversation. A study of 200 competitive interviews does not have 200 different quality levels depending on interviewer fatigue or skill — it has 200 conversations that all reached the same structural depth. That consistency is what makes pattern identification possible at scale.
Template 2: Buyer-Data Battlecard
Most battlecard templates are populated by the product marketing team using competitor websites, pricing pages, and sales team anecdotes. The result is a document that reflects what competitors say about themselves, filtered through what your sales team remembers from deals. That is not a battlecard — it is a guess.
A buyer-data battlecard is populated from actual buyer conversations. Every section below should be filled with evidence from competitive interviews, not internal assumptions.
Battlecard Sections
Section 1: Competitor Overview (From Buyer Perspective)
This is not a summary of the competitor’s website. It is a summary of how buyers actually perceive and describe this competitor.
| Field | Source |
|---|---|
| How buyers describe them | Direct quotes from interviews — the words buyers use unprompted |
| Common first impression | What buyers expect before evaluation starts |
| Perception-reality gap | Where the actual experience differs from the brand perception |
| Typical buyer profile | What kind of buyer gravitates toward this competitor and why |
Section 2: Common Buyer Perceptions
Use actual quotes from buyer interviews. Not paraphrases. Not your team’s interpretation. The exact words buyers used.
| Perception Area | Buyer Quote | Frequency | Source |
|---|---|---|---|
| Product quality | [Direct quote] | X of Y interviews | Study ID / Date |
| Ease of use | [Direct quote] | X of Y interviews | Study ID / Date |
| Pricing / value | [Direct quote] | X of Y interviews | Study ID / Date |
| Support experience | [Direct quote] | X of Y interviews | Study ID / Date |
| Sales process | [Direct quote] | X of Y interviews | Study ID / Date |
Section 3: Switching Triggers
What makes buyers consider alternatives to this competitor? These are the conditions under which this competitor’s customers become available.
| Trigger | Evidence | Actionable For |
|---|---|---|
| [Specific trigger] | [Quote or pattern] | Sales / Marketing / Product |
Section 4: Winning Arguments
What actually resonated in deals you won against this competitor? Not what you think should resonate — what buyers told you mattered.
| Argument | Win Rate Impact | Supporting Evidence |
|---|---|---|
| [Specific message or proof point] | [If measurable] | [Buyer quotes from won deals] |
Section 5: Losing Patterns
Where do you consistently fall short against this competitor? This is the most uncomfortable section and the most valuable one.
| Pattern | Frequency | Root Cause | Status |
|---|---|---|---|
| [Specific loss pattern] | X of Y losses | [Product gap / Positioning gap / Process gap] | [Being addressed / Accepted trade-off / Unknown] |
Section 6: Response Framework
For each common objection or competitive positioning the buyer raises, provide an evidence-backed response.
| When the buyer says… | Respond with… | Evidence |
|---|---|---|
| [Common objection] | [Evidence-backed counter] | [Source: buyer quote, data point, or case study] |
Populating the Battlecard
A single competitive interview study of 30-50 buyers who evaluated your solution against a specific competitor will populate every section of this template. The key is structuring the study to capture the right data:
- Use the discussion guide from Template 1 with competitor-specific probes targeted at the competitor this battlecard covers
- Code responses by section so the analysis maps directly to the battlecard structure (see the coding-framework sketch after this list)
- Update quarterly as new buyer data comes in — a battlecard built on six-month-old interviews is decaying
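To make the coding step concrete, here is a minimal sketch of a coding framework in Python. The code names, section mappings, and response record shape are illustrative assumptions, not a prescribed schema; adapt them to your own discussion guide and competitor set.

```python
# Minimal sketch: map interview response codes to battlecard sections.
# All code names and section mappings below are illustrative.

from collections import defaultdict

# Each analysis code is assigned to the battlecard section it feeds.
CODE_TO_SECTION = {
    "first_impression": "Competitor Overview",
    "perception_quality": "Common Buyer Perceptions",
    "perception_pricing": "Common Buyer Perceptions",
    "switch_trigger": "Switching Triggers",
    "won_because": "Winning Arguments",
    "lost_because": "Losing Patterns",
    "objection": "Response Framework",
}

def group_by_section(coded_responses):
    """coded_responses: list of dicts like
    {"interview_id": "INT-042", "code": "switch_trigger", "quote": "..."}
    Returns each battlecard section mapped to its supporting evidence."""
    sections = defaultdict(list)
    for response in coded_responses:
        section = CODE_TO_SECTION.get(response["code"])
        if section:
            sections[section].append(response)
    return sections

def frequency(coded_responses, code):
    """Produces the 'X of Y interviews' counts for the evidence tables."""
    interviews = {r["interview_id"] for r in coded_responses}
    mentioning = {r["interview_id"] for r in coded_responses if r["code"] == code}
    return len(mentioning), len(interviews)
```

With a structure like this, "build the battlecard" becomes a grouping operation over coded responses rather than a manual copy-paste exercise, and the frequency counts fall out of the same data.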
For teams running continuous competitive research, the User Intuition Customer Intelligence Hub maintains a searchable database of every buyer conversation, making battlecard updates a query rather than a new research project.
Template 3: Quarterly Perception Tracking Framework
A single competitive study gives you a snapshot. Quarterly tracking gives you a trend line. The perception tracking framework runs the same study structure each quarter against the same competitor set, measuring how buyer perceptions shift over time.
This is where competitive intelligence stops being reactive and starts being predictive. When you see a competitor’s trust score climbing for three consecutive quarters, you can act before the pipeline impact is visible in your CRM.
Attributes to Track
Track buyer perceptions across five core dimensions. These should be measured through open-ended interview responses coded to a consistent framework, not through survey scales — open-ended responses surface the reasoning behind perception shifts, not just the direction.
| Dimension | What It Measures | Example Interview Question |
|---|---|---|
| Positioning clarity | How well buyers understand what the competitor does and for whom | “How would you describe what [Competitor X] does to a colleague?” |
| Trust / credibility | Confidence in the competitor’s ability to deliver | “How confident were you that [Competitor X] could deliver what they promised?” |
| Value perception | Whether the perceived value justifies the cost | “Did the investment feel proportional to what you expected to get?” |
| Ease of evaluation | Friction in the buying process itself | “How easy or difficult was it to evaluate [Competitor X]?” |
| Support / partnership | Perception of ongoing relationship quality | “What did you expect the post-purchase relationship to look like?” |
Quarter-Over-Quarter Comparison Format
| Dimension | Q1 Score | Q2 Score | Delta | Trend | Notable Shift |
|---|---|---|---|---|---|
| Positioning clarity | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Trust / credibility | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Value perception | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Ease of evaluation | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
| Support / partnership | [Score] | [Score] | [+/-] | [Up/Down/Stable] | [Key quote or theme driving change] |
Trend Identification Methodology
Not every quarter-over-quarter change is a trend. Use these thresholds (see the classifier sketch after this list):
- Noise: Less than 5% change in a single quarter with no supporting qualitative shift. Do not act.
- Signal: 5-15% change in a single quarter, or less than 5% change for two consecutive quarters in the same direction. Investigate.
- Trend: More than 15% change in a single quarter, or 5%+ change for two consecutive quarters in the same direction. Act.
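As one way to make these thresholds operational, here is a minimal classifier sketch in Python. The threshold values come directly from the list above; the fractional-change representation and function shape are assumptions. The "no supporting qualitative shift" check for noise stays a human judgment and is not modeled here.

```python
# Minimal sketch of the noise / signal / trend thresholds above.
# Deltas are fractional quarter-over-quarter changes (0.05 = 5%).

def classify(current_delta, previous_delta=None):
    """Classify the latest change for one tracked dimension.

    current_delta: this quarter's change, e.g. +0.07 for a 7% rise.
    previous_delta: last quarter's change, used to detect persistence.
    """
    cur = abs(current_delta)
    prev = abs(previous_delta) if previous_delta is not None else 0.0
    same_direction = (
        previous_delta is not None
        and current_delta * previous_delta > 0  # both rising or both falling
    )

    # Trend: >15% in one quarter, or 5%+ for two consecutive quarters
    # in the same direction.
    if cur > 0.15 or (same_direction and cur >= 0.05 and prev >= 0.05):
        return "trend"   # act
    # Signal: 5-15% in one quarter, or <5% for two consecutive quarters
    # in the same direction.
    if 0.05 <= cur <= 0.15 or same_direction:
        return "signal"  # investigate
    # Noise: <5% single-quarter change with no persistence.
    return "noise"       # do not act

# Example: trust rose 6% this quarter after a 4% rise last quarter.
print(classify(0.06, 0.04))  # -> "signal"
```

Run against the quarter-over-quarter comparison table, this gives every dimension a consistent label instead of an analyst's gut call.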
Alert Triggers
Define the conditions that escalate from routine tracking to active response (see the configuration sketch after the table):
| Trigger | Threshold | Action |
|---|---|---|
| Competitor trust score rising | Two consecutive quarters of increase | Brief product and sales leadership |
| Your positioning clarity dropping | Single quarter drop exceeding 10% | Messaging review with marketing |
| Value perception diverging from competitor | Gap widening for two quarters | Pricing and packaging review |
| New competitor appearing in consideration sets | Mentioned by 10%+ of buyers | Add to tracking framework |
| Competitor not appearing in consideration sets | Dropped below 5% mention rate | Consider removing from active tracking |
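For teams that want these escalation rules in code rather than in a document, here is a hedged sketch expressing a subset of the table as declarative rules. The metric names and the shape of the metrics dict are illustrative assumptions, not a real tracking API.

```python
# Minimal sketch: a subset of the alert triggers above as declarative rules.
# Metric names and record shapes are illustrative.

ALERT_RULES = [
    {
        "name": "competitor_trust_rising",
        # Two consecutive quarters of increase.
        "condition": lambda m: m["trust_delta_prev"] > 0 and m["trust_delta_now"] > 0,
        "action": "Brief product and sales leadership",
    },
    {
        "name": "positioning_clarity_drop",
        # Single-quarter drop exceeding 10%.
        "condition": lambda m: m["own_positioning_delta"] < -0.10,
        "action": "Messaging review with marketing",
    },
    {
        "name": "new_competitor_in_consideration",
        # Mentioned by 10%+ of buyers.
        "condition": lambda m: m["new_competitor_mention_rate"] >= 0.10,
        "action": "Add to tracking framework",
    },
]

def evaluate_alerts(metrics):
    """metrics: dict of quarterly tracking values for one competitor."""
    return [
        (rule["name"], rule["action"])
        for rule in ALERT_RULES
        if rule["condition"](metrics)
    ]

quarter_metrics = {
    "trust_delta_prev": 0.04,
    "trust_delta_now": 0.06,
    "own_positioning_delta": -0.03,
    "new_competitor_mention_rate": 0.12,
}
for name, action in evaluate_alerts(quarter_metrics):
    print(f"{name}: {action}")
```

Keeping the rules declarative means the thresholds can be reviewed and adjusted each quarter without touching the evaluation logic.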
Running Quarterly Studies
Each quarterly tracking study should interview 50-100 recent evaluators (buyers who made a decision in the past 90 days) using a consistent discussion guide. The same questions each quarter enable valid comparison. New questions can be added to capture emerging themes, but core tracking questions should remain stable.
The cost of competitive intelligence research has dropped substantially with AI-moderated approaches — running a quarterly tracking study of 100 interviews is operationally feasible even for teams without dedicated research headcount.
Template 4: Intelligence Distribution Matrix
The best competitive intelligence is useless if it reaches the wrong people, in the wrong format, at the wrong time. Most CI programs fail at distribution, not collection. The intelligence distribution matrix defines who gets what, in what format, and on what cadence.
Distribution by Team
Sales
| Intelligence Type | Format | Cadence | Owner |
|---|---|---|---|
| Battlecards | One-page reference per competitor | Updated quarterly | Product Marketing |
| Objection library | Searchable database of evidence-backed responses | Updated as new data arrives | Sales Enablement |
| Competitive alerts | Slack/email notification when significant shift detected | Real-time | CI Program Owner |
| Deal-specific intelligence | Custom briefing for high-value competitive deals | On request | CI Program Owner |
Product
| Intelligence Type | Format | Cadence | Owner |
|---|---|---|---|
| Feature gap analysis | Ranked list of gaps cited in loss interviews, with frequency and revenue impact | Quarterly | Product Marketing |
| Switching trigger data | Summary of conditions that cause competitor customers to evaluate alternatives | Quarterly | CI Program Owner |
| Perception data | How buyers perceive product capabilities vs. competitors | Quarterly | CI Program Owner |
| Buyer workflow insights | How buyers actually use competitive products in practice | As available | Product Marketing |
Marketing
| Intelligence Type | Format | Cadence | Owner |
|---|---|---|---|
| Messaging gaps | Where your messaging does not match buyer language or priorities | Quarterly | CI Program Owner |
| Positioning intelligence | How buyers position you vs. competitors in their own words | Quarterly | CI Program Owner |
| Content opportunities | Competitive topics where buyers seek information and find gaps | Quarterly | Content Lead |
| Competitive narrative shifts | Changes in how competitors are positioning themselves | Monthly | CI Program Owner |
Executive
| Intelligence Type | Format | Cadence | Owner |
|---|---|---|---|
| Strategic competitive briefing | 2-3 page summary of competitive landscape changes with strategic implications | Quarterly | CI Program Owner |
| Competitive risk assessment | Emerging threats with potential revenue impact | Quarterly or as needed | CI Program Owner |
| Market perception trends | Longitudinal view of how market perceives you vs. key competitors | Semi-annually | CI Program Owner |
Distribution Rules
- Route insights to action owners, not information consumers. The product team gets feature gap data because they can act on it. They do not need the full quarterly report.
- Include buyer language in every deliverable. Stakeholders discount abstract findings. Direct quotes create urgency and specificity.
- Attach action items to every distribution. Intelligence without a recommended action is just noise. Every deliverable should answer: “What should the recipient do differently based on this?”
- Set SLAs for action. The distribution matrix should include expected response timelines. A battlecard update should be acknowledged within one week. A strategic risk assessment should generate an action plan within two weeks.
- Track action rates. If a team consistently receives intelligence and takes no action, either the intelligence is not relevant to them or the format is wrong. Both are fixable problems. The tracking sketch after this list shows one way to instrument both SLAs and action rates.
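Here is a minimal sketch of how SLA and action-rate tracking might look in practice, assuming a simple list of delivery records. The deliverable types, SLA windows, and field names are illustrative assumptions.

```python
# Minimal sketch: track distribution SLAs and per-team action rates.
# Deliverable types, SLA windows, and record shapes are illustrative.

from datetime import date, timedelta

SLA_DAYS = {
    "battlecard_update": 7,            # acknowledged within one week
    "strategic_risk_assessment": 14,   # action plan within two weeks
}

def overdue(deliveries, today=None):
    """deliveries: list of dicts like
    {"type": "battlecard_update", "team": "sales",
     "sent": date(2024, 1, 8), "actioned": None or a date}.
    Returns deliverables past their SLA with no recorded action."""
    today = today or date.today()
    return [
        d for d in deliveries
        if d["actioned"] is None
        and today > d["sent"] + timedelta(days=SLA_DAYS[d["type"]])
    ]

def action_rate(deliveries, team):
    """Share of a team's deliverables that produced a documented action."""
    team_items = [d for d in deliveries if d["team"] == team]
    if not team_items:
        return None
    return sum(d["actioned"] is not None for d in team_items) / len(team_items)
```

A consistently low action rate for one team is the diagnostic signal described above: either the relevance or the format needs to change.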
The 30-Day CI Program Launch Timeline
Building a competitive intelligence program from scratch does not require six months and a dedicated team. It requires focus, the right templates, and a willingness to start with imperfect data and iterate.
Week 1: Foundation
Goal: Define scope, set up infrastructure, align stakeholders.
| Task | Detail | Output |
|---|---|---|
| Define top 3 competitors | Based on pipeline data — which competitors appear most often in deals | Prioritized competitor list |
| Draft key competitive questions | What do you need to know that you do not currently know? | 5-10 specific questions per competitor |
| Set up competitive monitoring | Google Alerts, competitor blog/changelog subscriptions, G2/Gartner tracking | Monitoring dashboard or channel |
| Identify program owner | Single person accountable for CI program output | Named owner with executive sponsor |
| Map intelligence consumers | Who needs what — use the distribution matrix template | Draft distribution matrix |
| Design first interview study | Use the discussion guide template, targeting recent evaluators | Finalized discussion guide |
Week 2: First Study
Goal: Conduct your first round of competitive buyer interviews and establish a monitoring baseline.
| Task | Detail | Output |
|---|---|---|
| Recruit interview participants | Recent evaluators who considered at least one of your top 3 competitors | 30-50 confirmed participants |
| Launch interview study | AI-moderated or human-moderated, using the discussion guide from Week 1 | Completed interviews within 48-72 hours (AI) or 2-3 weeks (human) |
| Baseline competitor monitoring | Document current competitor positioning, pricing, messaging | Competitor snapshot document |
| Set up analysis coding framework | Define categories that map to battlecard sections | Coding framework document |
With AI-moderated research, the interview phase compresses from weeks to days. A study of 50 buyer interviews can complete in 48-72 hours, giving you raw material for analysis by mid-week.
Week 3: Analysis and Deliverables
Goal: Turn raw interview data into initial battlecards and a competitive report.
| Task | Detail | Output |
|---|---|---|
| Code and analyze interview data | Apply the coding framework to all completed interviews | Coded dataset with themes |
| Build initial battlecards | One per top competitor using the buyer-data battlecard template | 3 battlecards (v1) |
| Draft competitive intelligence report | Key findings, buyer evidence, recommended actions per team | CI report (v1) |
| Identify perception baseline | Score each competitor on the five tracking dimensions | Q1 perception baseline |
| Flag surprises | Findings that contradict internal assumptions | Surprise findings brief |
Week 4: Distribution and Governance
Goal: Distribute first findings and establish the ongoing program cadence.
| Task | Detail | Output |
|---|---|---|
| Sales briefing | Present battlecards, walk through objection responses, gather feedback | Sales-validated battlecards |
| Product briefing | Present feature gap data and switching triggers | Product team action items |
| Marketing briefing | Present messaging gaps and positioning intelligence | Marketing team action items |
| Executive briefing | Present strategic competitive landscape summary | Executive-level competitive brief |
| Establish quarterly cadence | Schedule Q2 tracking study, set recurring briefing dates | Program calendar |
| Set up governance | Define update triggers, escalation paths, action SLAs | CI program governance doc |
By the end of Week 4, you have a functioning competitive intelligence program with live battlecards, a perception baseline for tracking, and a distribution system that routes intelligence to action. It will not be perfect. The first set of battlecards will have gaps. The discussion guide will need refinement based on what you learned. The distribution matrix will need adjustment based on which teams actually used the intelligence.
That is expected. A CI program is a system that improves with every cycle. The goal of the first 30 days is to get the system running — not to produce a definitive competitive analysis.
How Do You Measure CI Program Success?
A competitive intelligence program without measurement is a content production exercise. These five metrics tell you whether your CI program is actually changing competitive outcomes.
Battlecard usage rate. What percentage of competitive deals involve a rep accessing the battlecard? Track this through CRM integration or enablement platform analytics. If you have battlecards and reps are not using them, the problem is either discoverability (they cannot find them), relevance (the content does not match what they face in deals), or trust (the evidence feels stale or internally generated). Target: 70%+ of competitive deals.
Win rate impact. Compare win rates in competitive deals where the battlecard was used versus where it was not. This is the most direct measure of CI program value. A well-populated battlecard built on buyer evidence should produce a measurable win rate lift within two quarters. Track this by competitor — some battlecards will outperform others, revealing where your evidence is strongest and where it needs reinforcement.
Competitive deal flag rate. What percentage of pipeline deals are tagged as competitive, and against which competitors? If reps are not flagging competitive deals in CRM, you have a data quality problem that undermines every other metric. Low flag rates also indicate that the CI program has not made it easy enough for reps to identify and tag competitive situations.
Intelligence freshness score. How old is the buyer evidence in each battlecard? Set a maximum age — 90 days is a good threshold for fast-moving markets, 180 days for slower categories. Battlecards built on stale evidence erode trust. Track the date of the most recent buyer interview feeding each battlecard and flag any that exceed the freshness threshold.
Stakeholder adoption. Beyond sales, which teams are consuming and acting on competitive intelligence? Track the distribution matrix against actual engagement — are product teams incorporating feature gap data into roadmap decisions? Is marketing adjusting messaging based on positioning intelligence? Measure not just whether intelligence is distributed, but whether it produces documented action within the expected SLA.
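For illustration, here is a minimal sketch computing three of these metrics from deal and battlecard records. The field names are assumptions, not a real CRM schema; in practice these values would come from CRM or enablement platform exports.

```python
# Minimal sketch: CI program metrics from deal and battlecard records.
# Field names are illustrative, not a real CRM schema.

from datetime import date

def battlecard_usage_rate(deals):
    """Share of competitive deals where a rep used the battlecard.
    Target per the text: 0.70+."""
    competitive = [d for d in deals if d["competitive"]]
    if not competitive:
        return None
    return sum(d["battlecard_used"] for d in competitive) / len(competitive)

def win_rate_lift(deals):
    """Win rate with battlecard usage minus win rate without."""
    def win_rate(subset):
        return sum(d["won"] for d in subset) / len(subset) if subset else 0.0
    competitive = [d for d in deals if d["competitive"]]
    used = [d for d in competitive if d["battlecard_used"]]
    unused = [d for d in competitive if not d["battlecard_used"]]
    return win_rate(used) - win_rate(unused)

def stale_battlecards(battlecards, max_age_days=90, today=None):
    """Battlecards whose newest buyer interview exceeds the freshness
    threshold (90 days for fast-moving markets, 180 for slower ones)."""
    today = today or date.today()
    return [
        b["competitor"] for b in battlecards
        if (today - b["latest_interview"]).days > max_age_days
    ]
```

None of this requires a dedicated analytics stack; a weekly export and a short script are enough to know whether the program is working.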
From Templates to Compounding Intelligence
Templates give you structure. What they cannot give you is scale or memory.
A competitive intelligence program that runs quarterly studies, updates battlecards with buyer evidence, tracks perception shifts over time, and distributes intelligence to the right teams will outperform any competitor monitoring tool. But the operational burden of running that program — recruiting participants, conducting interviews, coding responses, maintaining a searchable intelligence database — is what causes most CI programs to decay after the first quarter.
This is the problem User Intuition was built to solve. AI-moderated competitive interviews complete at scale in 48-72 hours. Every conversation feeds a searchable Customer Intelligence Hub where competitive insights compound over time. Quarterly perception tracking becomes a recurring study rather than a recurring project. Battlecard updates become queries against a growing database of buyer evidence rather than new research efforts.
The templates in this guide work regardless of the tools you use. But if you want to run them at the cadence and scale that produces real competitive advantage, see how User Intuition handles competitive intelligence or start a free trial to run your first competitive study this week.
For related frameworks, see our win-loss analysis template — win-loss and competitive intelligence share methodology, and the best programs run both from the same buyer data.