Pre-seed validation is the cheapest insurance a solo founder can buy, and most solo founders buy none of it. The reason is not laziness. It is that the validation literature is a tangle of overlapping frameworks (Lean Startup, The Mom Test, Jobs-to-be-Done, customer development) that each emphasize one slice of the problem without giving a solo founder a complete sequence to run. This guide fixes that. It lays out the four steps of pre-seed validation in order, with the exact questions to ask, the sample size needed, and the decision rule at each step.
The thesis is simple. Pre-seed validation has four parts: problem validation, solution validation, pricing and willingness-to-pay validation, and competitive positioning validation. Each step answers a different question. Skipping any one produces a startup that is confident about the wrong thing. A solo founder can run the full sequence in 2 to 3 weeks using AI-moderated interviews on a bootstrapped budget of under $1,000. For a deeper treatment of the full research motion that surrounds this checklist, see the complete guide to solo founder customer research.
What Is Pre-Seed Validation?
Pre-seed validation is the process of testing a startup hypothesis with real potential customers before raising capital or building a product. It is what happens in the first 3 to 6 months of a company’s life, before there is a prototype, before there is a pitch deck, and often before there is a co-founder. The defining feature is that the test subject is the founder’s thesis, not the founder’s product, because the product does not yet exist.
The term is often used loosely to mean “talking to some customers,” but that usage is too soft to be actionable. Pre-seed validation is a structured sequence of four tests. Each test has a specific question, a specific method, a specific sample size, and a specific decision rule. Running the four in order is what separates validated startups from startups that feel validated.
The four tests are sequential, not parallel. Problem validation comes first because if the problem is not real, nothing downstream matters. Solution validation comes next because a real problem that no concept solves cleanly is still a dead end. Pricing comes third because a real problem with a strong concept that nobody will pay for cannot sustain a company. Competitive positioning comes last because even a real problem with a strong concept at a viable price can lose to a well-positioned incumbent in a head-to-head comparison.
This is the sequence. The rest of this guide is the exact script for each step.
Step 1: Problem Validation
Problem validation answers one question: does the problem you think you are solving actually exist for the people you think have it, in the form you think it takes? The test uses 10 to 15 idea validation interviews with participants who match your ICP (ideal customer profile). Theme saturation, meaning the point at which new interviews stop revealing new themes, typically arrives between interview 8 and interview 12 for a tightly defined ICP. Plan for 15 to be safe.
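As a rough illustration, theme saturation can be checked mechanically once each interview has been coded into themes. The sketch below is an assumption-laden operationalization, not part of the guide: the theme labels are hypothetical, and the three-interview window is one reasonable choice among several.

```python
# Mechanical saturation check, assuming each interview has already been
# coded into themes (labels here are hypothetical). Saturation is declared
# after `window` consecutive interviews that add no new themes.

def saturation_point(interview_themes, window=3):
    """Return the 1-based interview index at which saturation is reached,
    or None if it never is."""
    seen, quiet = set(), 0
    for i, themes in enumerate(interview_themes, start=1):
        new = set(themes) - seen
        seen |= set(themes)
        quiet = 0 if new else quiet + 1
        if quiet >= window:
            return i
    return None

# Themes coded from 12 interviews; nothing new appears after interview 9.
coded = [
    {"manual exports"},
    {"manual exports", "stale data"},
    {"stale data", "no alerts"},
    {"manual exports"},
    {"no alerts", "tool sprawl"},
    {"tool sprawl"},
    {"stale data"},
    {"approval delays"},
    {"context switching"},
    {"stale data"},
    {"tool sprawl"},
    {"manual exports"},
]
print(saturation_point(coded))  # -> 12
```

If the function returns None across 15 interviews, the ICP is probably too broad for saturation to arrive on this budget.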
The method is unprompted problem elicitation. You ask participants to walk you through the last time they tried to do the job your product would help with. You do not describe your idea. You do not hint at the problem. You let them surface it in their own words or not at all. If 70% or more of participants describe the problem without prompting, the problem is real. If fewer than 70% describe it, either the problem is narrower than you thought, the ICP is wrong, or the problem does not exist in the form you assumed.
The five questions are: (1) Walk me through the last time you tried to do [the job]. (2) What made that hard? (3) How did you solve it? (4) How much time, money, or frustration did that cost you? (5) If this happens again next month, what would you do differently? These five cover existence, intensity, current workaround, cost of the status quo, and switching intent. AI-moderated interviews handle this script reliably across 10 to 15 participants in 3 to 5 days, which is the single biggest speed-up over traditional user research for a solo founder.
The decision rule. Proceed to Step 2 if 70% or more of ICP participants describe the problem unprompted and the cost of the status quo is material (time, money, or frustration that they can quantify). Stop or re-scope if fewer than 70% describe the problem, or if the problem exists but participants cannot articulate a meaningful cost. A real problem that costs nothing to live with does not make a startup.
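The Step 1 decision rule can be sketched as a small gate function. Everything here is illustrative: the field names and sample data are hypothetical, and only the 70% threshold and the quantifiable-cost requirement come from the guide.

```python
# Sketch of the Step 1 gate. Field names and sample data are hypothetical;
# the 70% threshold and the "quantifiable cost" requirement are the guide's.

def problem_gate(responses, threshold=0.70):
    """responses: list of dicts with 'unprompted' (bool) and
    'quantified_cost' (a string like '4 hrs/week', or None)."""
    n = len(responses)
    unprompted = sum(r["unprompted"] for r in responses) / n
    quantified = sum(
        1 for r in responses if r["unprompted"] and r["quantified_cost"]
    ) / n
    proceed = unprompted >= threshold and quantified >= threshold
    return {
        "unprompted": round(unprompted, 2),
        "quantified": round(quantified, 2),
        "verdict": "proceed" if proceed else "stop or re-scope",
    }

# 11 of 15 participants surfaced the problem unprompted and named a cost.
interviews = (
    [{"unprompted": True, "quantified_cost": "4 hrs/week"}] * 11
    + [{"unprompted": False, "quantified_cost": None}] * 4
)
print(problem_gate(interviews))  # verdict: proceed (11/15 = 73%)
```

The two-part check matters: a high unprompted share with vague cost answers still fails the gate.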
Step 2: Solution Validation
Solution validation answers a different question: does a concept that describes your solution feel like a meaningfully better answer than the participant’s current workaround? The test is a concept test, not a prototype test. You do not need to build anything. A one-page description of what the product does, who it is for, and what it replaces is enough.
Run 10 interviews. Show each participant the concept description, ask them to react in their own words, ask them to compare it to their current workaround, and ask them to describe what would have to be true for them to switch. The exact prompts are: (1) In your own words, what does this product do? (2) Who do you think it is for? (3) How does it compare to how you solve this problem today? (4) What would have to be true for you to switch to it? (5) What is the first thing you would want to try with it?
The decision rule. Proceed to Step 3 if 60% or more of participants describe the concept as a better answer than their current workaround and can name at least one concrete switching condition that is within your control to meet. Re-scope if fewer than 60% prefer the concept. Kill if participants cannot articulate what the concept does after reading the description, which usually means the concept is too vague or the problem statement from Step 1 is wrong.
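The three outcomes of the Step 2 rule can be sketched the same way. The 60% preference bar and the kill condition come from the guide; the 50% comprehension floor below is an assumed operationalization of "cannot articulate what the concept does," and the data is hypothetical.

```python
# Sketch of the Step 2 decision rule. Field names and data are hypothetical;
# the 60% preference bar and the kill condition are the guide's, while the
# 50% comprehension floor is an assumption.

def solution_gate(responses, prefer_threshold=0.60, comprehension_floor=0.50):
    n = len(responses)
    understood = sum(r["understood"] for r in responses) / n
    prefers = sum(
        1 for r in responses
        if r["prefers_concept"] and r["switching_condition"]
    ) / n
    if understood < comprehension_floor:
        return "kill: concept too vague or problem statement wrong"
    if prefers < prefer_threshold:
        return "re-scope the concept"
    return "proceed to pricing"

# 7 of 10 prefer the concept and name a concrete switching condition.
sample = (
    [{"understood": True, "prefers_concept": True,
      "switching_condition": "works with my existing CRM"}] * 7
    + [{"understood": True, "prefers_concept": False,
        "switching_condition": None}] * 3
)
print(solution_gate(sample))  # -> proceed to pricing
```

Note that a preference without a named switching condition does not count toward the 60%, matching the "within your control to meet" requirement.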
The common mistake at this step is building a prototype first. A prototype locks in assumptions before validating them, and the founder then spends weeks changing the prototype to fit participant feedback instead of changing the concept. The concept test is faster, cheaper, and more honest because there is no sunk cost.
Step 3: Pricing and Willingness-to-Pay Validation
Pricing validation answers the third question: will participants pay an amount that supports your unit economics? A concept that passes Steps 1 and 2 but fails Step 3 cannot sustain a company. Pricing is also the step that most solo founders skip, which is why so many pre-seed startups discover the pricing problem only after building the product.
The method is a combination of Van Westendorp price sensitivity and direct willingness-to-pay probes. Run 10 interviews after the concept is fixed. Ask: (1) At what price would this be so expensive you would not consider it? (2) At what price would this be expensive but you would still consider it if the value is there? (3) At what price would this be a bargain? (4) At what price would this be so cheap you would question the quality? Plotting the cumulative responses to these four questions and reading where the curves cross gives you the acceptable price range and the optimal price point. Then ask direct probes: (5) If the price were $X, how likely are you to try it in the next 30 days? (6) What would you need to see to pay that price?
The decision rule. Proceed to Step 4 if the Van Westendorp optimal price lands within your target unit economics and at least 40% of participants say “likely” or “very likely” at that price. Re-scope pricing or the concept if the optimal price is below your unit economics floor. Participants almost always name a lower price than they will eventually pay, so the Van Westendorp optimal is a conservative read, not an optimistic one. For cost reference, this validation runs at $20 per interview on the Pro plan, or at $0 for the first 3 interviews on the Starter plan; see pricing for details.
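A simplified numerical read of the Van Westendorp questions, plus the 40% gate, might look like the sketch below. The respondent data is hypothetical, and a real analysis would plot all four cumulative curves; this version only approximates the crossing of the "too cheap" and "too expensive" curves on a price grid.

```python
# Simplified Van Westendorp read plus the Step 3 gate, on hypothetical data.
# A real analysis plots all four cumulative curves; this sketch only finds
# where the "too cheap" and "too expensive" shares cross on a price grid.

def share(thresholds, p, too_high):
    """Share of respondents for whom grid price p crosses their threshold."""
    if too_high:
        return sum(p >= x for x in thresholds) / len(thresholds)
    return sum(p <= x for x in thresholds) / len(thresholds)

def optimal_price(too_cheap, too_expensive, grid):
    """First grid price where the two curves are closest (the crossing)."""
    return min(
        grid,
        key=lambda p: abs(share(too_cheap, p, too_high=False)
                          - share(too_expensive, p, too_high=True)),
    )

too_cheap = [5, 8, 10, 10, 12, 15, 15, 20, 20, 25]         # question 4 answers
too_expensive = [20, 30, 40, 50, 50, 60, 60, 75, 80, 100]  # question 1 answers
opp = optimal_price(too_cheap, too_expensive, range(5, 101))

# Direct probe (question 5): share answering "likely" or "very likely" at opp.
likely = [True, True, False, True, False, True, False, False, True, False]
passes_gate = sum(likely) / len(likely) >= 0.40
print(opp, passes_gate)
```

With 10 respondents the crossing is coarse; treat the output as a range to pressure-test against your unit economics floor, not a precise price.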
Step 4: Competitive Positioning Validation
Competitive positioning answers the final question: in a head-to-head comparison against the participant’s closest existing alternative, does your framing win for the specific use case you are targeting? A concept can pass problem, solution, and pricing validation and still lose this step, because the participant defaults to the incumbent the moment both are shown side by side.
The method is a forced comparison. Run 8 to 10 interviews. Show each participant your concept description and two incumbent alternatives (the two options the participant is most likely to consider today). Ask: (1) Which of these three best fits your specific situation, and why? (2) What are you giving up by picking your top choice? (3) If you had to switch from your current solution, which would you switch to, and what is the main reason? (4) What is the single biggest reason the other two are not your top choice?
The decision rule. Proceed to build if 50% or more of participants pick your concept as the best fit for their specific use case and can articulate a reason that is specific, not generic (for example, “it is faster for running weekly pulses on a small team” is specific; “it looks nicer” is not). Re-scope positioning if fewer than 50% pick your concept. The pivot here is usually not the product. It is the positioning: the specific use case, the specific ICP, or the specific framing that makes your concept obviously the right pick for that narrow segment.
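The Step 4 tally can be sketched as follows. The 50% bar is the guide's; whether a stated reason counts as "specific" is coded by hand after reading each transcript, and all data here is hypothetical.

```python
# Sketch of the Step 4 tally, on hypothetical data. The 50% bar is the
# guide's; the 'reason_is_specific' flag is coded by hand per transcript,
# not detected automatically.

def positioning_gate(picks, threshold=0.50):
    wins = sum(
        1 for p in picks
        if p["choice"] == "concept" and p["reason_is_specific"]
    )
    share = wins / len(picks)
    verdict = "proceed to build" if share >= threshold else "re-scope positioning"
    return share, verdict

picks = (
    [{"choice": "concept", "reason_is_specific": True}] * 5
    + [{"choice": "concept", "reason_is_specific": False}]       # generic reason
    + [{"choice": "incumbent_a", "reason_is_specific": True}] * 2
    + [{"choice": "incumbent_b", "reason_is_specific": True}] * 2
)
print(positioning_gate(picks))  # (0.5, 'proceed to build')
```

The sixth participant picked the concept but gave a generic reason, so that pick does not count toward the 50%.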
This is the step where the sharp wedge emerges. A concept that wins head-to-head for a narrow use case is a much stronger starting point than a concept that almost wins everywhere but wins nowhere. Solo founders do not have the capital to compete broadly at pre-seed. The wedge is the only viable strategy.
How to Run the Full Checklist in 2 to 3 Weeks with AI Moderation
The full sequence takes 2 to 3 weeks end to end for a solo founder running it alone. The timeline: Step 1 (Problem) in days 1 to 5 with 10 to 15 interviews. Step 2 (Solution) in days 6 to 10 with 10 interviews. Step 3 (Pricing) in days 11 to 15 with 10 interviews. Step 4 (Competitive Positioning) in days 16 to 20 with 8 to 10 interviews. Any buffer left at the end is for re-scoping a step that comes back ambiguous.
AI-moderated interviews are what make this timeline feasible for a solo founder. Traditional validation required the founder to schedule, moderate, and transcribe every interview personally, which turned 40 interviews into 6 to 8 weeks of calendar-matching work. AI moderation handles scheduling, fieldwork, moderation, and transcription asynchronously, which collapses the calendar time to 2 to 3 weeks and removes the founder as the bottleneck. On a 4M+ global panel with 48 to 72 hour fieldwork turnaround and support for 50+ languages, the founder defines the ICP and the script, and the interviews run in parallel.
Total research cost at Pro plan rates ($20 per interview) for the full program of 38 to 45 interviews: $760 to $900. Solo founders starting out can run Step 1 for free using the 3 free interviews on the Starter plan ($0 per month, no credit card), get an early read on problem existence, and upgrade to Pro once the thesis is worth the full program. The combination of free first interviews and $20 per interview on the paid plan is what makes the full four-step checklist accessible to bootstrapped solo founders who have historically skipped validation because it felt unaffordable.
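The budget arithmetic follows directly from the per-step sample sizes and the plan rates stated above, as this short sketch shows:

```python
# Budget math from the guide's own numbers: per-step sample sizes,
# $20 per Pro interview, and 3 free Starter interviews.

per_step_low = [10, 10, 10, 8]     # minimum samples for Steps 1-4
per_step_high = [15, 10, 10, 10]   # maximum samples for Steps 1-4

for sizes in (per_step_low, per_step_high):
    total = sum(sizes)
    print(total, total * 20, (total - 3) * 20)
# 38 interviews cost $760 all on Pro, or $700 with the 3 free interviews;
# 45 interviews cost $900, or $840 with the 3 free interviews.
```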
For the full list of interview questions across all four steps, see the solo founder customer interview questions guide. For the broader context of how pre-seed validation fits into the rest of a solo founder’s research motion, see the complete guide to solo founder customer research. The checklist is the tactical sequence. The surrounding guides are the strategic frame.