
5-Question Rapid Validation Framework for Solo Founders

By Kevin, Founder & CEO

Solo founders fail for one of two reasons: they build something no one wants, or they build something people want but will not pay for. Both failures are preventable with 48-72 hours of rapid validation before a line of code is written. The 5-question framework exists because 30-question discussion guides are the wrong tool for the solo-founder problem. Solo founders do not need ethnographic depth. They need a go/no-go decision cheap enough to run weekly against multiple hypotheses, and structured enough that the answer is clear when the interviews come back.

This guide covers the exact 5 questions, the sample size, the decision thresholds, and the post-fielding scoring method. Total cost at User Intuition’s $20 per interview Pro plan pricing runs $200-$300 per hypothesis, or $0 for the first three interviews on the Starter plan. For context on customer-interview question design more broadly, see the complete list of solo-founder customer interview questions. For the broader case on idea validation as a solution category, including how validation fits into the founder workflow, that page covers the strategic layer.

Why the 5-Question Framework Beats 30-Question Discussion Guides for Rapid Validation

The traditional customer discovery interview runs 30-60 minutes, uses a 25-35 question guide, and produces a rich qualitative transcript that takes another 45 minutes per interview to synthesize. At 15 interviews, that is 15 hours of fielding plus 11 hours of analysis, which is a two-week process even for a dedicated researcher. Solo founders do not have two weeks per hypothesis. They have two days before the next idea surfaces.

The 5-question framework compresses the decision loop by stripping the interview down to only the questions that produce a go/no-go answer. Pain frequency establishes whether the problem is real and common enough to matter. Current workarounds establish whether prospects are already motivated to solve it, and what the solution space looks like. Pain cost quantifies whether the problem is expensive enough to justify a paid solution. Ideal solution reveals what prospects already imagine buying, unprimed by your pitch. Willingness to pay anchors the price point before you build and discover it is wrong.

Every other question in a traditional discussion guide (demographic profiling, adjacent-category behavior, competitive awareness, brand perception) is valuable for later-stage research but noise for the build-or-kill decision. Strip it out. The 5 questions are necessary and sufficient for the decision, and the post-fielding scoring takes 15 minutes rather than 11 hours because each question maps to a single decision threshold.

Question 1: Pain Frequency (“Have you experienced [problem]?”)

The exact phrasing matters. Ask: “In the last 30 days, have you experienced [specific problem]?” Not “Do you ever experience” or “Have you had any issues with.” The 30-day window forces recall of recent, specific incidents rather than abstract agreement that the problem exists in the universe.

Decision threshold: at least 60% of your 10-15 interview sample should confirm pain in the last 30 days. Below 60%, the problem is too rare, too niche, or does not exist for this segment. At 60-80%, the problem is real and worth continuing to question 2. Above 80%, the pain is high-frequency and the bottleneck shifts to solution differentiation and willingness to pay, questions 4 and 5.

Watch for hedging. “Sometimes” and “occasionally” are polite variants of no. Count only unambiguous yes responses, ideally with a specific incident the prospect can describe. AI-moderated interviews surface these specifics because the moderator probes automatically when the answer is vague. For a solo founder, recruiting the exact ICP on User Intuition’s 4M+ panel means the 60% threshold is measured against the real audience, not a proxy.

Question 2: Current Workarounds (“What did you do about it?”)

This question is the most diagnostic of the five. If a prospect confirms pain in question 1 but cannot describe a current workaround in question 2, the pain is not real. People who actually experience friction do something about it, even if the something is suboptimal. A prospect who says “I just deal with it” without describing what “dealing with it” looks like is being polite, not truthful.

Real workaround answers sound like this: “I export the data to a spreadsheet and manually cross-reference two systems, which takes me about 40 minutes each week.” That answer gives you three things: the workaround process, the time cost, and an implicit willingness to invest effort in a solution. Vague workarounds (“I google it,” “I ask my team,” “I figure it out”) indicate theoretical pain, not operational pain.

The workaround answers also define the solution space. If 80% of prospects describe the same workaround, your product should replace or streamline that specific workflow. If workarounds are heterogeneous, with 15 different approaches across 15 interviews, the problem space is fragmented and your product needs to bridge multiple workflows, which is a harder positioning challenge. Workaround heterogeneity is a yellow flag, not a kill signal, but it changes how you frame the solution in question 4.

Question 3: Pain Cost (“What did the problem cost you?”)

Ask for a specific number: “How much time or money did this problem cost you in the last 30 days?” Prospects will initially answer with feelings (“it was really frustrating”) rather than numbers. Probe for the number. AI-moderated interviews handle this probing automatically by asking follow-ups like “how many hours specifically” or “roughly what dollar amount.”

The answer does not have to be precise. “Probably 3-4 hours per week” is directionally useful. “Definitely more than $500 a month” is directionally useful. “I’m not sure” repeated twice is a signal that the pain is not quantifiable, which means the prospect will not pay to solve it. Unquantifiable pain is almost always untreatable pain in a business context.

Decision threshold: quantified pain cost should be at least 3-5x your intended price. If the cost is $50/month of wasted time and you want to charge $200/month, the math does not work at the prospect level even if the absolute cost is meaningful. Pain cost sets the economic ceiling, and your price should sit comfortably below that ceiling. If the pain cost from question 3 comes back below the price you plan to test in question 5, you have a pricing problem before you have a product.
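The ceiling arithmetic is simple enough to sketch in a few lines. This is an illustrative helper, not part of the framework's vocabulary; the function name and the conservative 3x default are our own:

```python
def pain_cost_clears_ceiling(monthly_pain_cost, target_price, multiple=3.0):
    """Check whether quantified pain cost justifies the intended price.

    Rule of thumb from the framework: pain cost should be at least
    3-5x the target price. `multiple` defaults to the 3x floor.
    """
    return monthly_pain_cost >= multiple * target_price

# $50/month of wasted time cannot support a $200/month price...
pain_cost_clears_ceiling(50, 200)   # False: 50 < 3 * 200
# ...but $800/month of quantified pain clears a $200 price at 3x.
pain_cost_clears_ceiling(800, 200)  # True: 800 >= 600
```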

Question 4: Ideal Solution (“What would solve this?”)

Ask: “If you could wave a magic wand and get exactly the solution you want for this problem, what would that look like?” The magic wand framing unlocks unprimed imagination. Prospects who are primed by your pitch will describe your product back to you, which is a contamination problem, not a validation signal. Unprimed prospects describe what they actually want, which is almost always different from what you planned to build.

Three patterns to watch for in answers. First, convergence: if 60%+ of prospects describe the same solution direction, the market has a clear solution expectation and your product should match it. Second, divergence: if answers spread across 5+ different solution visions, the market is fragmented and positioning becomes critical; you have to pick one vision and defend it. Third, orthogonality: if prospects describe a solution that has nothing to do with what you planned to build, you have a misaligned hypothesis and should either pivot the product or pivot the target audience.

This is also where you discover features prospects want that you had not considered. Do not dismiss these as scope creep at the validation stage. Note them. The MVP question is solved later. For now, you are mapping the solution space the market already imagines. For solo founders running multiple hypotheses weekly, the solutions/idea-validation workflow documents how these solution-space maps feed back into the next validation cycle.

Question 5: Willingness to Pay (“What would you pay?”)

Ask: “If a solution existed that delivered [specific outcome from question 4], what would you pay per [unit] to use it?” The unit matters. “Per month” works for subscriptions. “Per use” works for transactional products. “Per user per month” works for B2B SaaS. Anchor the unit to your intended pricing model.

Prospects will anchor low. That is expected. The useful signal is not the specific dollar amount but the distribution. If 80% of answers cluster within a 2-3x range ($50-$150, for example), you have a viable price corridor. If answers range from $5 to $5,000, the market has no pricing expectation and you have pricing discovery work ahead before you can commit to a go-to-market.

Decision threshold: median willingness to pay should be at least 50% of your target price. If you want to charge $200 and the median answer is $100, you have a 50% anchor gap, which is closable with value demonstration in the sales cycle. If the median is $30, the gap is too wide and either your price is wrong or the segment cannot afford you. User Intuition’s pricing page documents the same principle on our side: $20 per interview is not a conversation opener; it is a price point we arrived at through exactly this kind of willingness-to-pay discovery, compressed into a range our target ICP anchors comfortably within.
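Both question-5 signals can be computed directly from the raw answers. A minimal sketch, with one simplifying assumption: it uses the overall max/min spread ratio as a stand-in for the "most answers cluster within 2-3x" corridor test, and the function and field names are hypothetical:

```python
import statistics

def score_willingness_to_pay(answers, target_price):
    """Summarize question-5 answers against the two thresholds:
    median at 50%+ of target price, and answers spanning no more
    than roughly a 3x range (a viable price corridor)."""
    median = statistics.median(answers)
    spread = max(answers) / min(answers)  # overall spread ratio
    return {
        "median": median,
        "median_clears_threshold": median >= 0.5 * target_price,
        "spread_ratio": spread,
        "viable_corridor": spread <= 3.0,
    }

# Answers clustering in a $50-$150 corridor against a $200 target:
score_willingness_to_pay([50, 75, 100, 120, 150], target_price=200)
# median 100 clears the $100 bar (50% of $200); 3.0x spread is inside the corridor
```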

For new categories where prospects have no reference price, replace the direct question with a substitution question: “What do you currently spend on solving this, in time or tools, and what percentage of that spend would you redirect to a better solution?” The answer derives willingness to pay from existing spend rather than imagined price points, which is more reliable when prospects cannot price-anchor against competitors.

How to Score a 5-Question Study in 15 Minutes Post-Fielding

The scoring method is the reason the framework produces a 48-72 hour decision. Fielding takes 24-48 hours on User Intuition’s platform with the 4M+ panel. Scoring takes 15 minutes because each question maps to a binary decision threshold, and the 5 binary outcomes produce a single go/no-go answer.

Score each question against its threshold. Question 1: pain frequency at or above 60%, yes/no. Question 2: workarounds described with specific process, 60%+ of respondents, yes/no. Question 3: pain cost quantified at 3-5x your target price, yes/no. Question 4: solution direction converges among 60%+ of respondents or orthogonality is below 20%, yes/no. Question 5: median willingness to pay at 50%+ of target price, yes/no.

Go if 4 of 5 questions clear their thresholds. Kill if 2 or more fail. Iterate if exactly 3 clear, which indicates the idea has bones but the positioning, price, or target audience needs refinement before a second validation wave. The iterate outcome is the most common and the most valuable: it tells you exactly which variable to change in the next 48-hour cycle.

Solo founders running this framework weekly on User Intuition’s 4M+ panel at $20 per interview typically validate or kill 2-4 hypotheses per week at a total cost of $400-$1,200. That pace is only possible because the framework is 5 questions, not 30, and because AI-moderated interviews at 48-72 hour turnaround with 98% satisfaction remove the recruiting and fielding bottlenecks that traditional methods impose. The $0 Starter plan with 3 free interviews is enough to pressure-test the framework on your first hypothesis before committing to the Pro plan. A G2 5.0 rating confirms the method works across founder profiles, not just ours. The point of the framework is not to be clever; it is to make the go/no-go decision fast enough that you can afford to be wrong and try again.

Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

How many interviews should I run per hypothesis?

Run 10-15 interviews per hypothesis. Fewer than 10 and you cannot separate signal from noise. More than 15 and you hit diminishing returns for directional go/no-go decisions. Solo founders should treat 10-15 as the sweet spot for the 5-question framework because each interview is 15-20 minutes of analysis, and the goal is a decision in 48-72 hours, not a publishable study. If the initial 15 come back ambiguous, run a second wave with a refined audience rather than expanding the same wave.

What pain-frequency threshold should I look for?

At least 60% of prospects should confirm they experienced the problem in the last 30 days. Below 60%, the pain is too infrequent or too niche to build a business on, and you should kill or reframe the idea. A result of 60-80% indicates a real but manageable problem where solution direction matters more than pain intensity. Above 80% indicates high-frequency pain where the bottleneck shifts from demand validation to solution differentiation and willingness to pay.

How do I distinguish real pain from politeness?

Politeness shows up in question 2 and question 3. If prospects confirm the problem but cannot describe any workaround they currently use, they are likely being agreeable rather than experiencing real pain. If prospects cannot quantify the cost of the problem in time or money, the pain is theoretical, not operational. Real pain always has a current workaround and a quantifiable cost. AI-moderated interviews on platforms like User Intuition reduce politeness bias further because prospects disclose more to a neutral AI moderator than to a founder who obviously wants a yes.

What if willingness to pay comes back below my target price?

Three options. First, reframe the offering to bundle more value at the target price, but only if the bundle solves adjacent pain that interviews already surfaced. Second, lower the price and test unit economics at the new point. Third, target a different customer segment with higher willingness to pay for the same solution. Do not proceed to build at the original price if 60%+ of prospects anchored below it. Willingness to pay is the hardest number to move post-launch, and price sensitivity problems discovered in week one are survivable while the same problems discovered in month six are usually fatal.

Does the framework work for new product categories?

Yes, but adjust question 5. For new categories, willingness to pay becomes speculative because prospects have no reference price. Replace the direct price question with a comparison: what do you currently spend on solving this problem in time, tools, or services, and what percentage of that spend would you redirect to a better solution? The answer gives you an economic ceiling derived from current spend rather than an imagined price point, which is more reliable for category-defining ideas.

How is rapid validation different from customer discovery?

Customer discovery interviews are open-ended, exploratory, and designed to surface unknown problems. Rapid validation is targeted, structured, and designed to test a specific hypothesis you already have. Use discovery when you have a market but no idea. Use rapid validation when you have an idea but no evidence. Solo founders typically need rapid validation because they already have a hypothesis, they just do not know if it is worth building.

Why not just run a survey?

Surveys miss the two questions that matter most. Question 2 (current workarounds) and question 4 (ideal solution) require open-ended exploration that surveys cannot probe. A prospect who types a two-word workaround in a survey reveals nothing, but the same prospect in a 15-minute AI-moderated interview describes the full workflow, which is the solution space. Surveys also over-index on hypothetical responses because there is no social pressure to be honest. AI-moderated interviews at $20 per interview on User Intuition's Pro plan balance the depth of qualitative with the speed and cost of quantitative.

How long should each interview run?

15-20 minutes. Longer than that and you hit fatigue, especially at 10-15 interviews per wave. The 5-question framework is designed to fit inside 15 minutes of prospect time with room for probing follow-ups. Each core question should get 2-3 minutes, with the remaining 3-5 minutes for the AI moderator to probe on vague answers, confirm specifics, and capture workaround details that prospects often mention in passing. If an interview is running long, the pain is probably real but the solution space is too broad — which is itself a useful validation signal.

Who should I recruit?

Recruit the exact ICP you intend to sell to, not a proxy. Proxies introduce noise that destroys the go/no-go signal. If the idea is for product managers at Series B SaaS companies, do not interview product managers at agencies. User Intuition's 4M+ panel supports this level of targeting with role, company size, industry, and behavioral filters, which is why 10-15 interviews is enough — the recruitment precision compensates for the small sample. Broad recruitment at 15 interviews gives you nothing. Precise recruitment at 15 interviews gives you an answer.

Should I validate before or after building a prototype?

Before. The entire point is to decide whether to build. Running rapid validation after a prototype exists biases you toward defending the build rather than questioning the premise. Solo founders who validate post-prototype almost always find ways to interpret ambiguous signals as positive because sunk cost is already in play. Validate the problem and the price first, then build the solution informed by question 4 (ideal solution) responses, not by your pre-existing architecture preferences.

Put This Research Into Action

Run your first 3 AI-moderated customer interviews free — no credit card, no sales call.
