
Idea Validation Template: A Founder's Playbook

By Kevin, Founder & CEO

A complete idea validation playbook consists of six templates that work as a connected system: a Validation Hypothesis Canvas for structuring and ranking assumptions, a Research Design Checklist for planning who to talk to and how, an Interview Guide Template for running structured 20-25 minute conversations, a Synthesis and Decision Framework for scoring demand and making go/pivot/kill decisions, an Investor Evidence Pack for translating findings into pitch-ready formats, and a Pivot Tracking System for compounding learning across iterations. Each template feeds the next, and together they replace ad-hoc validation with a repeatable research program.

Most founders approach idea validation with fragments of a process: a few customer calls here, a landing page test there, some informal conversations with friends. These activities feel productive but lack the structure to produce reliable evidence. The templates in this playbook address that gap by providing concrete formats, specific questions, and explicit decision criteria at every stage of the validation journey.

Why Do Most Idea Validation Templates Fail?

The internet is full of idea validation templates. Most of them fail for the same reason: they give founders a place to write down their assumptions but no methodology for testing those assumptions against reality.

A typical validation template is a spreadsheet with columns for “Assumption,” “Status,” and “Notes.” The founder fills in assumptions like “small business owners need a better invoicing tool” and then marks them “validated” after three conversations with people who happened to agree. There is no rigor in how the assumption was tested, no threshold for what counts as evidence, and no framework for deciding what to do when the evidence is ambiguous.

The second failure mode is isolation. Templates exist as standalone documents disconnected from the research process. A hypothesis canvas does not link to an interview guide. Interview findings do not feed into a scoring framework. The founder fills in each template independently, losing the compounding value of connected evidence.

The third failure mode is binary thinking. Most templates treat validation as pass/fail: the idea is either validated or it is not. Reality is more nuanced. An idea might be validated for one segment but not another, validated for the problem but not the price point, or validated for the use case but not the channel. Templates that force binary conclusions discard the granularity that makes validation actually useful for decision-making.

The playbook in this guide addresses all three failures. Each template includes explicit methodology for how to fill it in, decision criteria for what the evidence means, and connections to the next template in the sequence. For a deeper exploration of the overall validation process, see the idea validation complete guide.

Template 1 — Validation Hypothesis Canvas

The Validation Hypothesis Canvas is the starting point. It converts vague business ideas into structured, testable statements and prioritizes them by how much they matter and how little you know.

Hypothesis Statement Format

Every assumption about your business idea should be expressed in a standard format:

Problem: [X problem] exists for [Y audience]. Solution: [Z approach] would solve it. Evidence needed: [A, B, C signals] would confirm or disprove this.

For example:

Problem: Mid-market SaaS product managers (Y) struggle to get timely customer feedback before sprint planning (X). Solution: An AI-moderated interview platform that delivers 50+ customer conversations within 48 hours (Z) would solve the timing problem. Evidence needed: (A) Product managers confirm that feedback timing is a top-3 pain point in their workflow, (B) they currently spend more than 2 hours per week on manual workarounds, and (C) they would pay $500+ per month for a solution.

This format forces specificity. “People need better research tools” is not a hypothesis. “Mid-market PM teams averaging 20+ releases per quarter need customer feedback that arrives before sprint planning, not after” is testable.

Assumption Ranking Matrix

Once you have listed 10-20 hypotheses, rank them on two dimensions:

|                 | Low Uncertainty                              | High Uncertainty                          |
|-----------------|----------------------------------------------|-------------------------------------------|
| **High Impact** | Monitor — likely true but verify             | Test First — riskiest, most consequential |
| **Low Impact**  | Ignore — true or false, does not matter much | Defer — uncertain but low stakes          |

The upper-right quadrant — high impact, high uncertainty — contains the assumptions you must test first. These are the beliefs that would fundamentally change your business if they turned out to be wrong and where you have the least existing evidence.
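To make the quadrant assignment concrete, here is a minimal Python sketch. It assumes each hypothesis is rated 1-5 on impact and uncertainty; the ratings, the example hypotheses, and the 4-or-higher cutoff are illustrative choices, not part of the canvas itself:

```python
# Illustrative quadrant assignment for the impact-uncertainty matrix.
# Assumption: hypotheses are rated 1-5 on each dimension; 4+ counts as "high".

def quadrant(impact: int, uncertainty: int) -> str:
    """Map a hypothesis rating to one of the four matrix quadrants."""
    high_impact = impact >= 4
    high_uncertainty = uncertainty >= 4
    if high_impact and high_uncertainty:
        return "Test First"
    if high_impact:
        return "Monitor"
    if high_uncertainty:
        return "Defer"
    return "Ignore"

# Hypothetical hypotheses with made-up ratings, for demonstration only.
hypotheses = [
    ("PMs will pay $500+/mo", 5, 5),
    ("Feedback timing is a top-3 pain", 5, 2),
    ("Users prefer dark mode", 2, 4),
]
for name, impact, uncertainty in hypotheses:
    print(f"{name}: {quadrant(impact, uncertainty)}")
```

Sorting the full list so "Test First" hypotheses come out on top gives you the 3-5 assumptions to research immediately.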

Canvas Checklist

Use this checklist to verify your canvas is complete:

  • 10-20 hypotheses identified across problem, solution, pricing, and channel
  • Each hypothesis uses the Problem/Solution/Evidence format
  • Each hypothesis is plotted on the impact-uncertainty matrix
  • Top 3-5 hypotheses from the “Test First” quadrant are flagged for immediate research
  • Evidence criteria are specific and measurable, not vague
  • Time limit set: canvas review every 2 weeks as evidence accumulates

The canvas is a living document. As you run interviews and collect evidence, hypotheses move between quadrants. An assumption that started as high-uncertainty might become low-uncertainty after 15 interviews confirm it. The canvas should be updated after every research cycle, creating a visible record of how your understanding has evolved.

Template 2 — Research Design Checklist

The Research Design Checklist translates your prioritized hypotheses into a concrete research plan. It answers four questions: who do you talk to, how many conversations do you need, what do you ask, and where do you find participants.

Target Customer Definition

Define your target customer with enough specificity to write a screener. Vague targets like “startup founders” produce noisy data. Specific targets like “B2B SaaS founders who have raised Series A, have 10-50 employees, and launched their product within the last 18 months” produce focused evidence.

| Dimension    | Specifics                                              |
|--------------|--------------------------------------------------------|
| Role/Title   | Decision-maker for the problem area                    |
| Company size | Revenue range, employee count, or stage                |
| Industry     | Relevant verticals or exclusions                       |
| Behavior     | Currently experiencing the problem (not hypothetically)|
| Recency      | Encountered the problem within last 30-90 days         |
| Exclusions   | Competitors, existing customers, friends and family    |

Sample Size Guidance

The number of interviews you need depends on your stage and the precision you require:

| Purpose                                 | Sample Size                       | Rationale                                              |
|-----------------------------------------|-----------------------------------|--------------------------------------------------------|
| Early signal (single hypothesis)        | 10-15                             | Enough to identify whether a pattern exists            |
| Directional confidence (single segment) | 20-30                             | Thematic saturation for most consumer and B2B topics   |
| Segment comparison (2-4 segments)       | 50-100 total (15-25 per segment)  | Minimum per-segment count for reliable comparison      |
| Quantitative-adjacent confidence        | 100-200                           | Enables percentage-based claims with defensible sample |

Traditional research agencies make larger samples prohibitively expensive at $15,000-$75,000 for a single study. AI-moderated platforms like User Intuition compress the cost to approximately $20 per interview with results in 48-72 hours, making it viable to start with 10-15 interviews for early signal and expand to 50-100 as hypotheses sharpen. The platform supports 50+ languages and draws from a panel of 4M+ participants, so niche segments and international markets are accessible without specialized recruitment.
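As a quick sanity check on budgets, the per-interview arithmetic can be sketched in a few lines. The $20 flat rate is the platform figure quoted above; treat it as an approximation, since actual pricing may vary by study:

```python
# Rough budget arithmetic for the sample-size tiers above.
# Assumption: a flat ~$20 per AI-moderated interview, as quoted in the text.

PLATFORM_RATE = 20  # dollars per interview (approximate)

tiers = [
    ("early signal", 15),
    ("directional confidence", 30),
    ("segment comparison", 100),
]
for label, n in tiers:
    print(f"{label}: {n} interviews -> ${n * PLATFORM_RATE:,}")
# early signal: 15 interviews -> $300
# directional confidence: 30 interviews -> $600
# segment comparison: 100 interviews -> $2,000
```

Even the largest tier lands well under the low end of the $15,000-$75,000 agency range cited above.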

Screener Criteria

Your screener determines who qualifies for the study. A weak screener lets in participants who do not actually experience the problem, contaminating your data with hypothetical opinions rather than lived experience.

Must-have criteria (disqualify if not met):

  • Currently experiences the target problem (not “has experienced” or “might experience”)
  • Holds decision-making authority or strong influence over relevant purchases
  • Not employed by a direct competitor
  • Not a personal connection of anyone on the founding team

Nice-to-have criteria (use for segment balancing):

  • Specific company size tier
  • Geographic region
  • Technology stack or tool usage
  • Time since last relevant purchase decision

Recruitment Channel Selection

| Channel                     | Best For                             | Limitations                                |
|-----------------------------|--------------------------------------|--------------------------------------------|
| AI-moderated platform panel | Speed, scale, demographic precision  | Panel may not cover ultra-niche B2B roles  |
| LinkedIn outreach           | Senior B2B decision-makers           | Low response rates (5-15%), slow           |
| Customer advisory board     | Existing customer validation         | Biased toward current product users        |
| Community forums            | Niche enthusiasts, early adopters    | Self-selection bias, limited demographics  |
| Professional associations   | Industry-specific roles              | Slow approval processes, formal gatekeeping|

For most founders, starting with an AI-moderated platform panel and supplementing with targeted LinkedIn outreach for hard-to-reach segments provides the best balance of speed, cost, and quality.

Research Design Checklist

  • Target customer defined with 5+ specific dimensions
  • Screener criteria written with must-have and nice-to-have tiers
  • Sample size chosen based on research purpose
  • Recruitment channels identified with timeline estimates
  • Budget calculated (interviews x cost per interview)
  • Interview guide drafted (see Template 3)
  • Analysis plan outlined before data collection begins
  • Timeline set: research design to completed synthesis in under 2 weeks

Template 3 — Interview Guide Template

The Interview Guide Template structures a 20-25 minute validation conversation into five stages, each with a specific purpose and time allocation. This structure ensures every interview produces comparable data while leaving room for the unexpected insights that make qualitative research valuable.

Five-Stage Interview Structure

| Stage                     | Duration | Purpose                                               | Key Questions |
|---------------------------|----------|-------------------------------------------------------|---------------|
| 1. Context                | 2-3 min  | Establish role, responsibilities, and daily workflow  | “Walk me through a typical week in your role.” / “What are you primarily responsible for delivering?” |
| 2. Problem Exploration    | 5-7 min  | Uncover pain points, frequency, and emotional weight  | “What is the most frustrating part of [problem area]?” / “How often does this come up?” / “What happens when it goes wrong?” |
| 3. Current Solutions      | 3-5 min  | Map existing workarounds, spending, and satisfaction  | “How do you currently handle this?” / “What do you spend on it — in time and money?” / “What is the biggest gap in your current approach?” |
| 4. Solution Reaction      | 5-7 min  | Test concept resonance, objections, and perceived value | “If a tool did [concept description], how would that change your workflow?” / “What concerns would you have?” / “What would need to be true for you to switch?” |
| 5. Pricing and Commitment | 3-5 min  | Assess willingness to pay and urgency signals         | “What would you expect to pay for something like this?” / “If this were available today, what would your next step be?” / “Who else would need to approve this?” |

Core Questions by Stage (Detailed)

Stage 1: Context Setting

The purpose of the context stage is not small talk. It establishes the participant’s credibility as a source of evidence and creates a baseline for interpreting everything that follows.

  • “Tell me about your role and what your team is responsible for.”
  • “Walk me through a typical week — where does most of your time go?”
  • “What metrics or outcomes are you measured on?”

Stage 2: Problem Exploration

This is the most important stage. You are gathering evidence about whether the problem you believe exists actually exists in the participant’s daily life, and if so, how severely.

  • “When you think about [problem area], what is the most frustrating or time-consuming part?”
  • “Can you give me a specific recent example of when this was a problem?”
  • “How often does this come up — daily, weekly, quarterly?”
  • “What happens to the business when this goes wrong or gets delayed?”
  • “On a scale of 1-10, how painful is this relative to other challenges you face?”

Stage 3: Current Solutions

Understanding what participants currently do reveals the competitive landscape, switching costs, and baseline spending.

  • “How do you handle [problem area] today?”
  • “What tools, processes, or people are involved?”
  • “Roughly how much do you spend on this — including time, tools, and personnel?”
  • “What do you wish your current approach did better?”
  • “Have you tried other solutions? What happened?”

Stage 4: Solution Reaction

Introduce your concept simply and neutrally, then probe for genuine reactions rather than politeness.

  • “Let me describe something we are exploring. [30-second concept description.] What is your initial reaction?”
  • “What, if anything, would this change about how you work?”
  • “What concerns or questions come to mind immediately?”
  • “What would make you skeptical that this could actually work?”
  • “How does this compare to alternatives you have seen?”

Stage 5: Pricing and Commitment

The final stage tests whether stated interest translates into economic commitment. This is where false positives die.

  • “Based on the value this would provide, what would you expect to pay?”
  • “If this were available today at [price point], what would you do next?”
  • “Who else at your company would need to be involved in a purchase decision?”
  • “What would prevent you from moving forward even if you wanted to?”
  • “Would you be open to joining a beta program or early access list?”

The Laddering Method

When a participant gives a surface-level answer, use laddering to reach the underlying motivation. The pattern is: statement, then “why” or “tell me more,” repeated 3-5 times.

Participant: “I need something faster.”
Interviewer: “What would faster enable you to do?”
Participant: “Get results before the leadership meeting.”
Interviewer: “Why is that timing important?”
Participant: “My VP makes resource decisions in that meeting. If I don’t have data, we lose budget to teams that do.”

The surface answer was “I need something faster.” The real insight is that speed is a proxy for organizational credibility and budget authority. AI-moderated interviews through User Intuition apply this laddering technique consistently at scale, probing 5-7 levels deep across every participant and producing insight depth that manual interviews struggle to maintain over 50+ conversations. With 98% participant satisfaction across 4M+ panelists, the interview quality does not degrade as volume increases.

Interview Guide Checklist

  • Five-stage structure with time allocations totaling 20-25 minutes
  • 3-5 open-ended questions per stage, no leading or binary questions
  • Concept description written as a neutral 30-second script
  • Laddering prompts prepared for each stage
  • Warm-up question does not relate to the hypothesis (reduces priming)
  • Closing includes permission for follow-up and referral request

For detailed question frameworks tailored to idea validation specifically, see the idea validation interview questions guide.

Template 4 — Synthesis and Decision Framework

Raw interview data is valuable only when it is structured into evidence that supports decisions. The Synthesis and Decision Framework converts qualitative conversations into quantified demand scores and explicit go/pivot/kill recommendations.

Demand Scoring Matrix

After completing your interviews, score each participant’s responses across four dimensions:

| Dimension          | Score 1            | Score 2                 | Score 3                | Score 4               | Score 5                          |
|--------------------|--------------------|-------------------------|------------------------|-----------------------|----------------------------------|
| Problem Severity   | Mild inconvenience | Noticeable friction     | Significant pain point | Major business impact | Critical / existential           |
| Frequency          | Annual or less     | Quarterly               | Monthly                | Weekly                | Daily                            |
| Willingness to Pay | Would not pay      | Would pay under $50/mo  | Would pay $50-200/mo   | Would pay $200-500/mo | Would pay $500+/mo               |
| Urgency            | No timeline        | Someday / next year     | Next quarter           | Next month            | Immediately / actively searching |

Score each participant individually, then calculate averages per dimension and per segment. A composite demand score is the average across all four dimensions, expressed as a percentage of maximum (20 points = 100%).
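The composite calculation is simple enough to sketch directly. This is an illustrative Python version of the scoring rule just described; the participant scores are made up for demonstration:

```python
# Composite demand score: mean of four 1-5 dimension scores, expressed as
# a percentage of the 20-point maximum (so 20 points = 100%).

DIMENSIONS = ("severity", "frequency", "willingness_to_pay", "urgency")

def composite_score(scores: dict[str, int]) -> float:
    """Return the composite demand score as a percentage of maximum."""
    total = sum(scores[d] for d in DIMENSIONS)
    return 100 * total / (5 * len(DIMENSIONS))

# Hypothetical single-participant scores, for illustration only.
participant = {"severity": 4, "frequency": 4, "willingness_to_pay": 3, "urgency": 3}
print(f"{composite_score(participant):.0f}%")  # 14 of 20 points -> 70%
```

Averaging the per-participant composites within a segment gives the segment-level score used in the comparison table below.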

Segment Comparison Table

If you validated across multiple segments, compare them side by side:

| Metric                  | Segment A | Segment B | Segment C |
|-------------------------|-----------|-----------|-----------|
| Sample size             | n=25      | n=25      | n=20      |
| Avg. problem severity   | 4.2       | 3.1       | 2.4       |
| Avg. frequency          | 4.0       | 3.5       | 2.0       |
| Avg. willingness to pay | 3.8       | 2.9       | 1.7       |
| Avg. urgency            | 3.5       | 2.7       | 1.5       |
| Composite demand score  | 78%       | 61%       | 38%       |
| Recommendation          | Go        | Pivot     | Kill      |

This table makes the decision visible and defensible. Instead of arguing about which segment “feels” more promising, the team can point to structured evidence.

Go / Pivot / Kill Decision Criteria

The demand scoring matrix produces a composite score. Use these thresholds as decision guides:

Go (Composite score above 65%):

  • Problem severity averages 3.5+ (significant pain or higher)
  • At least 60% of participants describe active workarounds
  • Willingness to pay aligns with your target price point
  • Multiple participants express urgency (would act within 30 days)
  • Action: proceed to MVP or prototype phase

Pivot (Composite score 40-65%):

  • Problem exists but severity or frequency is lower than expected
  • Willingness to pay exists but at a different price tier than planned
  • Strong signal in an unexpected segment
  • Core concept resonates but specific solution approach needs rethinking
  • Action: refine hypothesis, redesign for the stronger segment or price point, revalidate

Kill (Composite score below 40%):

  • Problem severity averages below 2.5 (mild inconvenience)
  • Fewer than 30% of participants describe active workarounds
  • Willingness to pay is minimal or hypothetical
  • No urgency signals across the sample
  • Action: archive the hypothesis, document what you learned, move to next idea

These thresholds are guidelines, not absolutes. A composite score of 55% with extremely high urgency in a narrow sub-segment might warrant a targeted go decision rather than a broad pivot. The framework provides structure for the discussion, not a replacement for judgment.
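The threshold logic above can be captured in a few lines. This sketch applies the framework's 65% and 40% cutoffs mechanically; per the caveat just stated, treat the output as a starting point for discussion, not an automatic verdict:

```python
# Go/pivot/kill decision helper using the framework's composite-score
# thresholds: above 65% -> Go, 40-65% -> Pivot, below 40% -> Kill.

def recommendation(composite_pct: float) -> str:
    """Map a composite demand score (0-100) to a recommendation."""
    if composite_pct > 65:
        return "Go"
    if composite_pct >= 40:
        return "Pivot"
    return "Kill"

# Hypothetical segment scores, for illustration.
for label, score in [("strong demand", 78), ("mixed signal", 55), ("weak signal", 32)]:
    print(f"{label} ({score}%): {recommendation(score)}")
# strong demand (78%): Go
# mixed signal (55%): Pivot
# weak signal (32%): Kill
```

A real synthesis would pair this with the sub-segment and urgency caveats described above before committing to a decision.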

Synthesis Checklist

  • Every interview scored across all four demand dimensions
  • Segment averages calculated and compared
  • Top 10 direct quotes extracted and tagged by theme
  • Pattern analysis completed: what themes appeared in 50%+ of interviews
  • Surprise findings documented (evidence that contradicted hypotheses)
  • Go/pivot/kill recommendation written with supporting evidence
  • Dissenting evidence acknowledged (strongest counterarguments to the recommendation)

Template 5 — Investor Evidence Pack

Investors have seen thousands of pitch decks that claim “we talked to customers.” The Investor Evidence Pack template structures your validation findings into a format that demonstrates rigor, not just activity.

Customer Discovery Slide Template

A single “Customer Discovery” slide in your pitch deck should contain four elements:

1. Research methodology summary (2-3 lines): “We conducted [N] depth interviews with [target customer description] over [timeframe] using [methodology]. Participants were recruited via [channel] and screened for [criteria].”

Example: “We conducted 75 depth interviews with mid-market SaaS product managers over 10 days using AI-moderated interviews through User Intuition. Participants were recruited from a 4M+ global panel and screened for active involvement in customer feedback workflows.”

2. Key demand metrics (3-4 data points):

| Metric                              | Finding                                                      |
|-------------------------------------|--------------------------------------------------------------|
| Problem recognition (unprompted)    | 82% of participants described the problem without prompting  |
| Active workaround spending          | Average $2,400/month on existing solutions                   |
| Willingness to pay at target price  | 68% would pay $200+/month                                    |
| Urgency (would act within 30 days)  | 41% actively searching for alternatives                      |

3. Direct customer quotes (2-3 quotes): Select quotes that demonstrate the problem in the customer’s own language. The best quotes include specific numbers, named frustrations, or descriptions of failed alternatives. Investors value specificity over enthusiasm.

4. Competitive gap analysis (2-3 sentences): “Current solutions address [X] but fail at [Y]. Participants using [Competitor A] cited [specific limitation]. No existing solution provides [your differentiated capability].”

Evidence Pack Checklist

  • Research methodology described with sample size, timeline, and recruitment method
  • 3-5 quantified demand metrics with specific percentages
  • 5-10 direct customer quotes selected for specificity and credibility
  • Competitive landscape mapped from participant descriptions, not desk research
  • Segment-level breakdown if validation covered multiple segments
  • Methodology credibility signal: explain why interviews, not surveys or landing pages
  • Total cost and timeline documented (demonstrates capital efficiency of your research process)

What Investors Actually Look For

Investors evaluating validation evidence weight three factors:

Specificity over enthusiasm. “82% of participants described the problem unprompted” is stronger than “customers loved the concept.” Numbers, percentages, and direct quotes demonstrate rigor. Adjectives like “excited,” “enthusiastic,” and “interested” suggest the founder is interpreting rather than measuring.

Methodology rigor. Who did you talk to, how did you find them, and how did you prevent bias? Investors who have seen enough startups know that talking to 10 friends who all agreed is not validation. Explaining your screener criteria, recruitment channel, and sample size signals that you understand the difference between signal and noise.

Willingness to pivot. The strongest validation decks include evidence of ideas that were killed or pivoted based on data. This demonstrates that the founder uses evidence to make decisions rather than seeking evidence to confirm decisions already made. The idea validation complete guide covers how to design validation studies that produce honest evidence rather than confirmation bias.

Template 6 — Pivot Tracking System

Most startups pivot at least once. The Pivot Tracking System ensures that each pivot is informed by evidence from the previous iteration, creating a chain of compounding intelligence rather than a series of disconnected experiments.

Hypothesis-Result-Decision Chain

Track each validation cycle as a linked chain:

| Cycle | Hypothesis | Research | Key Finding | Decision | Next Hypothesis |
|-------|------------|----------|-------------|----------|-----------------|
| 1 | Mid-market PMs need faster customer feedback | 30 interviews, mid-market SaaS PMs | Problem confirmed but WTP below target at $50-100/mo | Pivot: test enterprise segment | Enterprise research directors need scalable qual research |
| 2 | Enterprise research directors need scalable qual | 40 interviews, enterprise insights teams | Strong demand (78% composite), WTP at $500+/mo | Go: build for enterprise | (Move to product development) |
| 3 | (Post-launch) Enterprise teams want self-serve setup | 25 interviews, existing enterprise users | 65% prefer managed service over self-serve | Pivot: managed service model | Enterprise teams will pay premium for full-service research |

Each row references the previous one. The “Next Hypothesis” column of cycle 1 becomes the “Hypothesis” column of cycle 2. This creates a visible evidence trail showing how each strategic decision was grounded in customer data.
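One lightweight way to keep this chain in code rather than a spreadsheet is a simple linked record per cycle. The field names and example values below are illustrative, not a prescribed schema:

```python
# Sketch of the hypothesis-result-decision chain as a linked record per
# cycle. Each cycle's next_hypothesis seeds the following cycle.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Cycle:
    hypothesis: str
    key_finding: str
    decision: str                      # "go", "pivot", or "kill"
    next_hypothesis: Optional[str] = None

# Hypothetical first cycle, mirroring the style of the example above.
cycles = [
    Cycle(
        hypothesis="Mid-market PMs need faster customer feedback",
        key_finding="Problem confirmed, but WTP below target",
        decision="pivot",
        next_hypothesis="Enterprise research directors need scalable qual",
    ),
]

# Starting the next cycle from the previous one preserves the evidence trail.
next_cycle = Cycle(hypothesis=cycles[0].next_hypothesis, key_finding="", decision="")
print(next_cycle.hypothesis)
```

Because every new cycle is constructed from the prior cycle's `next_hypothesis`, the chain cannot silently skip a link.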

Pivot Tracking Dashboard

Maintain a running dashboard that shows the health of your validation program:

| Metric                             | Current Cycle | Cumulative |
|------------------------------------|---------------|------------|
| Total interviews conducted         | 40            | 95         |
| Unique segments tested             | 1             | 3          |
| Hypotheses tested                  | 2             | 6          |
| Hypotheses validated (Go)          | 1             | 2          |
| Hypotheses pivoted                 | 1             | 3          |
| Hypotheses killed                  | 0             | 1          |
| Total research spend               | $800          | $1,900     |
| Average cost per validated insight | $400          | $317       |

This dashboard serves two purposes. For the founding team, it shows how the understanding of the market has evolved through structured research rather than intuition. For investors, it demonstrates capital-efficient learning: $1,900 in research spend across 95 interviews producing three major strategic decisions is dramatically more efficient than the industry norm of $15,000-$75,000 per validation study through traditional agencies.
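The dashboard's bottom line is plain division: in the example figures, cost per validated insight corresponds to research spend divided by hypotheses tested in that scope. A quick sketch of the arithmetic, using the example numbers:

```python
# Arithmetic behind the "average cost per validated insight" row:
# research spend divided by hypotheses tested, per scope. Figures are
# the illustrative dashboard values from the text.

current = {"spend": 800, "hypotheses_tested": 2}
cumulative = {"spend": 1900, "hypotheses_tested": 6}

for scope, d in [("current cycle", current), ("cumulative", cumulative)]:
    per_insight = d["spend"] / d["hypotheses_tested"]
    print(f"{scope}: ${per_insight:,.0f} per insight")
# current cycle: $400 per insight
# cumulative: $317 per insight
```

Tracking this number over time shows whether each additional research dollar is buying more learning or less.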

Intelligence Compounding

The most powerful aspect of the Pivot Tracking System is what it reveals across cycles. Patterns that are invisible within a single validation study become clear across multiple iterations:

Cross-segment insights. A feature that scored low in Segment A might score high in Segment B. Without tracking across pivots, this insight is lost when the team shifts focus.

Evolving language. How customers describe the problem shifts as you refine your concept description across cycles. Tracking this evolution reveals which framing resonates most naturally.

Competitive dynamics. Participants in Cycle 3 might reference competitors that did not exist during Cycle 1. Tracking competitive mentions over time reveals market velocity.

Price sensitivity trajectories. Willingness to pay often shifts as participants are exposed to more refined concepts. Early cycles produce conservative WTP estimates. Later cycles with more specific concepts often reveal higher WTP.

This compounding effect is why continuous validation programs outperform one-time validation gates. Every study builds on the last. The idea validation solution page describes how AI-moderated platforms make this compounding economically viable by compressing the cost and timeline of each cycle.

Pivot Tracking Checklist

  • Every validation cycle documented with hypothesis, method, finding, and decision
  • Each cycle links explicitly to the previous cycle’s decision
  • Running dashboard updated after every research cycle
  • Cross-cycle patterns reviewed quarterly
  • Kill decisions documented with the same rigor as go decisions
  • Total research spend tracked as an investment metric

How Do You Start Using These Templates Today?

The templates in this playbook are designed to be adopted incrementally. You do not need to implement all six before your first validation interview. Start with two: the Validation Hypothesis Canvas (Template 1) and the Interview Guide Template (Template 3). These two templates alone will produce more structured, reliable validation evidence than most founders generate in months of ad-hoc customer conversations.

Week 1: Hypothesis and Research Design Fill in the Validation Hypothesis Canvas with your top 10-20 assumptions. Plot them on the impact-uncertainty matrix. Select the top 3-5 for immediate testing. Then complete the Research Design Checklist: define your target customer, write your screener, choose your sample size, and identify recruitment channels.

Week 2: Interviews and Early Synthesis Run 20-50 interviews using the Interview Guide Template. If you are using an AI-moderated platform, the interviews and initial transcription happen within 48-72 hours at approximately $20 per interview. Begin scoring each interview using the Demand Scoring Matrix from Template 4 as transcripts become available.

Week 3: Synthesis and Decision Complete the Synthesis and Decision Framework. Calculate segment-level demand scores. Write your go/pivot/kill recommendation with supporting evidence and dissenting data points. If you are preparing for a fundraise, build the Investor Evidence Pack (Template 5) from the same data set. Update the Pivot Tracking System (Template 6) to establish your evidence chain.

This three-week cycle is repeatable. Each iteration sharpens your hypotheses, deepens your understanding of the market, and builds the evidence base that separates funded startups from rejected pitch decks. The cost of running this cycle with AI-moderated interviews through User Intuition is approximately $400-$2,000 depending on sample size, a fraction of the $15,000-$75,000 that traditional research agencies charge for equivalent depth. For a detailed cost breakdown across methods and sample sizes, see the idea validation cost guide.

The compounding effect is what matters most. Founders who run one validation study have data. Founders who run a continuous validation program — updating their hypothesis canvas, refining their interview guides, tracking pivot decisions, and building cumulative evidence — have intelligence. And in markets where every competitor has access to the same public data, proprietary customer intelligence from structured idea validation interviews is the only sustainable advantage that cannot be copied, purchased, or reverse-engineered.

Frequently Asked Questions

What templates do you need to validate a startup idea?
Six templates form a complete system: Validation Hypothesis Canvas, Research Design Checklist, Interview Guide Template, Synthesis and Decision Framework, Investor Evidence Pack, and Pivot Tracking System. Each feeds the next in sequence. You can start with the Hypothesis Canvas and Interview Guide alone, then add the others as your validation program matures.

What is a Validation Hypothesis Canvas?
A Validation Hypothesis Canvas structures your assumptions into testable statements using the format: Problem X exists for audience Y, Solution Z would solve it, and Evidence A, B, C would confirm or disprove it. Each hypothesis is then ranked on a 2x2 matrix of impact and uncertainty, so you test the riskiest, most consequential assumptions first.

How many interviews do you need to validate an idea?
For early directional signal, 10-30 interviews typically surface core patterns. For segment-level comparison, plan 50-100 interviews across 2-4 segments with 15-25 per segment. AI-moderated platforms like User Intuition make larger samples economically viable at approximately $20 per interview with results in 48-72 hours.

How should you structure a validation interview?
Structure interviews in five stages: Context (role and daily workflow), Problem Exploration (pain points and frequency), Current Solutions (existing workarounds and spending), Solution Reaction (concept feedback and objections), and Pricing Commitment (willingness to pay and switching triggers). Avoid leading questions that telegraph the answer you want.

How do you know when to kill an idea?
Use demand scoring across four dimensions: problem severity (1-5), frequency (daily to annual), willingness to pay (stated and behavioral signals), and urgency (nice-to-have versus hair-on-fire). If the composite score falls below 40% of maximum across 20+ interviews, the idea likely needs a kill decision. Between 40-65% suggests pivot territory.

What is a demand scoring matrix?
A demand scoring matrix quantifies qualitative interview data across four dimensions: problem severity, frequency, willingness to pay, and urgency. Each dimension is scored 1-5 based on interview evidence, then weighted by segment. The composite score enables apples-to-apples comparison across ideas, segments, and pivot iterations.

How do you present validation evidence to investors?
Structure your investor evidence pack around four elements: market size evidence derived from interview data, direct customer quotes that demonstrate unmet demand, quantified demand metrics from your scoring matrix, and competitive gaps identified through current-solution analysis. Investors want to see that you talked to real customers, not that you ran a survey.

Do these templates work for both B2B and B2C ideas?
Yes. The framework applies to both, but weight the dimensions differently. B2B validation emphasizes willingness to pay, budget authority, and procurement complexity. B2C validation emphasizes frequency, emotional intensity, and switching friction. Adjust your screener criteria and interview questions to match the buying context.

How long does a full validation cycle take?
With traditional research methods, a full cycle takes 4-8 weeks and costs $15,000-$75,000. With AI-moderated interviews, the same cycle compresses to 5-10 days: 1-2 days for hypothesis and research design, 2-3 days for 50+ interviews via an AI-moderated platform, and 2-3 days for synthesis and decision-making.

How is this different from a lean canvas?
A lean canvas maps your business model on a single page. A validation template system tells you how to test each cell of that canvas with real customer evidence. The lean canvas says you have a hypothesis about customer segments. The validation playbook gives you the interview guide, scoring framework, and decision criteria to prove or disprove that hypothesis.

How do you track learning across pivots?
Use a Pivot Tracking System that chains each cycle: Hypothesis, Research Result, Decision (go/pivot/kill), and Next Hypothesis. This creates a visible evidence trail showing how each pivot was informed by real data rather than intuition. It also prevents teams from revisiting previously invalidated directions.

Should you validate before or after building an MVP?
Before. An MVP tests execution of a solution. Validation tests whether the problem is worth solving and whether customers would pay for a solution. Building an MVP without validation risks spending months engineering a solution to a problem that does not exist or that customers would not pay to fix. Validate the problem first, then build the minimum product to test the solution.
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve

3 interviews free. No credit card required.

Enterprise

See a real study built live in 30 minutes.

No contract · No retainers · Results in 72 hours