
One-Person Startup Research: Pre-Launch to Post-PMF

By Kevin, Founder & CEO

A one-person startup has one researcher, one product manager, one salesperson, and one CEO. They are all the same person. That means every hour spent on customer research competes directly with building product, closing deals, and keeping the operation running. The question is not whether research matters. It is how to extract maximum learning from minimum time and budget at each stage of growth.

This guide maps the complete research lifecycle for solo founders across four phases: pre-launch, launch, growth, and scale. Each phase has different questions, different methods, different budgets, and different cadences. A founder who matches research activities to their current phase avoids both under-researching (building on assumptions) and over-researching (using research as a form of productive procrastination).

The economics that make this lifecycle practical did not exist five years ago. Traditional research agencies charge $15,000 to $50,000 per study, which means solo founders either skipped research entirely or relied on informal conversations with biased samples. AI-moderated interview platforms have changed the math: $20 per depth interview, results in 48-72 hours, access to a 4M+ participant panel in 50+ languages, and 98% participant satisfaction. User Intuition is built for exactly this use case, letting a solo founder run continuous research programs that previously required dedicated insights teams and six-figure budgets.

What Does the Research Lifecycle Look Like for a One-Person Startup?

The research lifecycle for a one-person startup has four phases, each defined by the core question the founder needs answered. The phases are not rigid — some startups move through them in months, others take years — but the sequence is consistent because each phase builds on evidence from the previous one.

Phase 1: Pre-Launch — Does this problem exist, and will people pay to solve it?

Phase 2: Launch — What do early adopters need, and where does the experience break?

Phase 3: Growth — Why do customers leave, what would make them stay, and where should we expand?

Phase 4: Scale — How do we build research infrastructure that sustains learning without the founder doing everything?

Each phase has a recommended budget, a research cadence, and specific study types that produce the highest-leverage insight for that stage. The budgets start at $200 per month and scale to $2,000 per month, reflecting both the increasing complexity of research questions and the increasing revenue that funds them.

The compounding effect is the critical insight. Phase 1 research informs Phase 2 research design. Phase 2 findings shape Phase 3 hypotheses. By Phase 4, you have a longitudinal dataset of customer intelligence that makes every new study faster and more precise. Founders who start researching early build this compounding advantage. Founders who start late spend their first studies discovering things their competitors learned months ago.

Phase 1: Pre-Launch — Does Anyone Actually Want This?

Pre-launch research has one job: prevent you from building something nobody wants. CB Insights found that 42% of startups fail because of no market need. That statistic has not changed in years, and it applies disproportionately to solo founders who lack the team diversity that sometimes catches bad assumptions.

Pre-launch research is the highest-return investment available to a solo founder because a $200 interview study can save twelve months of building on a flawed premise. The three research activities in this phase are problem discovery, demand validation, and pricing research.

Problem Discovery: Confirming the Problem Exists

The first question is not whether your solution is good. It is whether the problem is real. Problem discovery interviews explore the target customer’s current workflow, pain points, and existing workarounds without ever mentioning your product idea.

What to research. Does the target customer experience the problem you believe exists? How painful is it on a scale from mild inconvenience to blocking their core objectives? What have they already tried to solve it? How much time and money do they currently spend on workarounds?

Method. Run 10 AI-moderated depth interviews with recruited target customers. The AI moderator explores the problem space through structured probing without introducing your concept. Participants describe their reality rather than reacting to your pitch.

What good evidence looks like. Seven or more out of ten participants independently describe the problem using similar language and emotional intensity. They reference specific workarounds they have tried and quantify the cost (time, money, or opportunity) of the current state.

What bad evidence looks like. Participants do not recognize the problem until you describe it. They describe it as a minor inconvenience rather than a meaningful pain. Their workarounds are free or low-effort, suggesting the problem is not painful enough to pay to solve.

Budget. Ten interviews at $20 each: $200. This is the single most valuable $200 a pre-launch founder can spend.
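The "seven or more out of ten" bar above can be made mechanical once you have coded your transcripts. Here is a minimal sketch; the field names, coding scheme, and threshold function are illustrative assumptions, not part of any platform's API:

```python
# Tally problem-confirmation evidence from coded interviews.
# Each interview is coded (by you, after reading the transcript) with
# booleans for the signals described above. The schema is invented
# for illustration.

def problem_confirmed(interviews, threshold=0.7):
    """Return (share, verdict): the share of participants who both
    described the problem unprompted and quantified its cost, and
    whether that share clears the evidence threshold."""
    confirming = [
        i for i in interviews
        if i["described_problem_unprompted"] and i["quantified_cost"]
    ]
    share = len(confirming) / len(interviews)
    return share, share >= threshold

interviews = [
    {"described_problem_unprompted": True,  "quantified_cost": True},
    {"described_problem_unprompted": True,  "quantified_cost": True},
    {"described_problem_unprompted": True,  "quantified_cost": True},
    {"described_problem_unprompted": True,  "quantified_cost": True},
    {"described_problem_unprompted": True,  "quantified_cost": True},
    {"described_problem_unprompted": True,  "quantified_cost": True},
    {"described_problem_unprompted": True,  "quantified_cost": True},
    {"described_problem_unprompted": False, "quantified_cost": False},
    {"described_problem_unprompted": True,  "quantified_cost": False},
    {"described_problem_unprompted": False, "quantified_cost": True},
]
share, go = problem_confirmed(interviews)
print(f"{share:.0%} confirming -> {'build' if go else 'revisit the premise'}")
```

The point of writing the rule down is discipline: decide the threshold before fielding the study, not after reading the results.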

Demand Validation: Testing Willingness to Act

Confirming that a problem exists is necessary but not sufficient. Demand validation tests whether target customers would change their behavior — specifically, whether they would switch from their current workaround to a new solution and pay for it.

What to research. Would target customers actively seek a solution? What would trigger them to switch from their current approach? What are their decision criteria? Who else would need to approve the purchase? What competing priorities would this budget come from?

Method. Run 10-15 AI-moderated interviews that begin with problem exploration (building on Phase 1 findings) and then introduce the concept at a high level. Probe for genuine purchase intent, not hypothetical interest. The critical question is not “Would you use this?” but “What would need to be true for you to switch from what you are doing now?”

What good evidence looks like. Participants express active frustration with current solutions. They describe specific trigger events that would make them seek alternatives. They can articulate what they would pay and what budget it would come from.

What bad evidence looks like. Participants say “that sounds interesting” but cannot articulate a switching trigger. They are satisfied with current workarounds. They express interest that is clearly hypothetical rather than operational.

Budget. Fifteen interviews at $20 each: $300. Combined with problem discovery, total Phase 1 spend is approximately $500.

Pricing Research: Establishing Viability

If the problem is real and demand is genuine, the final pre-launch question is whether customers will pay enough to build a viable business. Pricing research conducted before building prevents the common mistake of building a product and then discovering the market will not support the price you need.

What to research. What price anchors do target customers reference? How much do they currently spend on workarounds? At your target price, do they say yes without hesitation, negotiate, or decline? What is the price at which the product becomes a clear decision versus a budget discussion?

Method. Run 10 interviews focused specifically on pricing. Use techniques like Van Westendorp pricing sensitivity or direct willingness-to-pay probing. Present two to three price points and observe reactions. The most revealing question is often: “What would you stop paying for to fund this?”
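The core of a Van Westendorp analysis reduces to a small calculation: find the price where the share of participants calling it "too cheap" crosses the share calling it "too expensive" (the Optimal Price Point). A simplified sketch, with invented per-participant answers:

```python
# Simplified Van Westendorp Optimal Price Point: the candidate price
# where the "too cheap" and "too expensive" cumulative shares cross.
# The sample answers below are invented for illustration.

def optimal_price_point(too_cheap, too_expensive):
    n = len(too_cheap)
    candidates = sorted(set(too_cheap) | set(too_expensive))

    def gap(p):
        still_too_cheap = sum(tc >= p for tc in too_cheap) / n
        already_too_exp = sum(te <= p for te in too_expensive) / n
        return abs(still_too_cheap - already_too_exp)

    return min(candidates, key=gap)

# Per-participant answers, in dollars per month.
too_cheap     = [5, 10, 10, 15, 20, 10, 15, 10, 5, 10]
too_expensive = [40, 50, 60, 45, 80, 50, 40, 60, 55, 50]
print(optimal_price_point(too_cheap, too_expensive))
```

A full Van Westendorp study also reports the acceptable price range between the "point of marginal cheapness" and "point of marginal expensiveness"; the same crossing logic applies with the other two question curves.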

Budget. Ten interviews at $20 each: $200. Total pre-launch research investment: approximately $700 across all three study types.

Pre-Launch Research Cadence

| Week | Activity | Interviews | Cost |
|---|---|---|---|
| 1-2 | Problem discovery | 10 | $200 |
| 3-4 | Demand validation | 15 | $300 |
| 5-6 | Pricing research | 10 | $200 |
| Total | 3 studies in 6 weeks | 35 | $700 |

This cadence produces a build, pivot, or kill decision in six weeks for approximately $700. Compare that to the alternative: building for six months on assumptions and discovering the market does not want the product at $50,000+ in sunk development costs.

Phase 2: Launch — What Do Early Adopters Need?

You have launched. Early users are arriving. The research questions shift from “should we build this?” to “how do we make this work for the people using it?” Phase 2 research has three priorities: understanding early adopters, diagnosing onboarding, and prioritizing features.

Early Adopter Interviews: Understanding Your Best Customers

Your first users are not representative of your eventual market. They are early adopters — people willing to tolerate rough edges because the problem is painful enough that an imperfect solution beats no solution. Understanding who they are and why they signed up reveals your actual value proposition, which is often different from what you assumed.

What to research. What motivated them to sign up? Where did they find you? What problem were they trying to solve? What alternatives did they consider? What almost stopped them from signing up?

Method. Run 10-15 AI-moderated interviews with your earliest active users. Focus the first half on their journey to your product (discovery, evaluation, decision) and the second half on their experience using it (first impressions, core value, frustrations).

Key insight to extract. The language early adopters use to describe why they signed up is your most effective acquisition messaging. If they say “I needed a way to understand why customers churn without hiring a research agency,” that is a better headline than anything you would write in a brainstorming session.

Budget. Fifteen interviews at $20 each: $300 per month.

Onboarding Research: Where Does the Experience Break?

Most startups lose 40-60% of signups during onboarding. For a solo founder, every lost user represents wasted acquisition effort. Onboarding research identifies exactly where and why users disengage.

What to research. At which onboarding step do users hesitate or abandon? What are they trying to accomplish at each step? Where does the product’s mental model diverge from the user’s expectations? What information or guidance is missing?

Method. Combine behavioral analytics (Hotjar session recordings) with AI-moderated interviews focused on the onboarding flow: ask users to walk through their onboarding experience, describe their confusion points, and explain what they expected at each step.

Budget. Ten onboarding-focused interviews at $20 each plus Hotjar free tier: $200 per month.

Feature Prioritization: Building What Matters

Solo founders face a unique version of the prioritization problem. With one developer, one designer, and one PM (all the same person), the cost of building the wrong feature is not just wasted engineering time — it is the opportunity cost of every other feature that did not get built.

What to research. Which capabilities do active users value most? What missing features would increase usage frequency or willingness to pay? What features do users think they want but would not actually change their behavior?

Method. Run 10 interviews focused on feature value. Ask users to describe their workflow, identify where they switch to other tools, and rank potential features by impact on their daily work. Complement with a survey of your broader user base ranking the same features quantitatively.
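One lightweight way to merge the interview rankings with the survey is to average each feature's rank across respondents. A sketch; the feature names and rankings are invented for illustration:

```python
from statistics import mean

# Average each feature's rank across respondents (1 = most valuable).
# Feature names and rankings are invented examples.
rankings = [
    ["exports", "integrations", "templates"],
    ["integrations", "exports", "templates"],
    ["exports", "templates", "integrations"],
]

def average_rank(rankings):
    """Return (average_rank, feature) pairs, best-ranked first."""
    features = set(rankings[0])
    return sorted(
        (mean(r.index(f) + 1 for r in rankings), f) for f in features
    )

for rank, feature in average_rank(rankings):
    print(f"{feature}: average rank {rank:.2f}")
```

Averaged ranks are a prioritization starting point, not a verdict: weigh them against the interview evidence about which features would actually change behavior.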

Budget. Ten interviews at $20 each plus Google Forms survey: $200 per month.

Launch Phase Budget and Cadence

| Monthly Activity | Interviews | Cost |
|---|---|---|
| Early adopter research | 10-15 | $200-300 |
| Onboarding research | 10 | $200 |
| Feature prioritization | 10 | $200 |
| Total | 30-35 | $600-700 |

Cadence. Rotate between the three study types, running one study per week. Each study uses 8-10 interviews, and results arrive within 48-72 hours. You review findings on Monday and apply them during the week. This weekly rhythm keeps research connected to building decisions rather than existing as a separate workstream.
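The weekly rotation is simple enough to generate programmatically. A sketch of the cycle described above; the study names and counts mirror the plan and nothing here is a platform API:

```python
from itertools import cycle, islice

# Rotate the three launch-phase study types week by week.
# Names and interview counts mirror the plan above; illustrative only.
STUDIES = [
    ("Early adopter interviews", 10),
    ("Onboarding research", 10),
    ("Feature prioritization", 10),
]

def plan(weeks):
    """Return (week, study, interviews) tuples for a rotating cadence."""
    rotation = islice(cycle(STUDIES), weeks)
    return [(week, name, n) for week, (name, n) in enumerate(rotation, 1)]

for week, name, n in plan(6):
    print(f"Week {week}: launch '{name}' ({n} interviews, ~${n * 20})")
```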

Phase 3: Growth — Why Do Customers Leave, Stay, and Expand?

You have product-market fit signals: retention is stabilizing, referrals are appearing, and you are spending more time on growth than on survival. Phase 3 research shifts from understanding individual user experiences to understanding market dynamics. The three priorities are churn research, competitive intelligence, and expansion validation.

Churn Research: Diagnosing and Preventing Revenue Loss

Churn is the existential threat for every one-person startup because you cannot outgrow a leaking bucket when you are the only person filling it. Churn research identifies why customers leave and what would make them stay, turning a mystery into a manageable set of fixable problems.

What to research. What was the customer trying to accomplish? What worked well? What disappointed them? Was there a specific trigger event that prompted cancellation? What did they switch to? What would have changed their decision?

Method. Run AI-moderated exit interviews with every churned customer within two weeks of cancellation. The AI moderator explores the full arc from initial expectations through the experience to the cancellation decision. Ten churn interviews per month typically reveal two to three patterns that account for the majority of cancellations.

Budget. Ten churn interviews per month at $20 each: $200.

Impact. Every churn pattern you identify and fix is a permanent improvement to retention. If ten interviews reveal that 40% of churn is caused by a confusing billing page, fixing that page saves months of future churned revenue. The ROI on churn research is among the highest of any research activity.
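The "two to three patterns account for the majority" claim is easy to check against your own exit interviews once each cancellation is coded with a primary reason. A sketch with invented reason labels:

```python
from collections import Counter

# Tally churn reasons coded from exit interviews and report how much
# of total churn the top patterns explain. Labels are invented examples.
reasons = [
    "confusing billing page", "confusing billing page", "missing integration",
    "confusing billing page", "no perceived value", "missing integration",
    "confusing billing page", "price", "missing integration", "no perceived value",
]

counts = Counter(reasons)
top = counts.most_common(2)
covered = sum(n for _, n in top) / len(reasons)
for reason, n in top:
    print(f"{reason}: {n} of {len(reasons)}")
print(f"Top {len(top)} patterns explain {covered:.0%} of churn")
```

If the top two or three reasons explain most of your churn, you have a short, fixable backlog; if churn is spread thinly across many reasons, the product likely has a deeper value problem.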

Competitive Intelligence: Understanding Your Market Position

As you grow, competitors notice. Some are incumbents adjusting their strategy. Others are startups targeting the same opportunity. Competitive intelligence research reveals how target customers perceive your alternatives and what decision criteria drive their choices.

What to research. What alternatives do potential customers evaluate? How do they compare options? What criteria are most important? Where do competitors win and lose? What positioning gaps exist?

Method. Run 15 interviews with prospects who are actively evaluating solutions in your category, including people who chose competitors. Ask them to walk through their evaluation process, describe what each option offered, and explain what tipped their decision.

Budget. Fifteen competitive intelligence interviews per month at $20 each: $300.

Expansion Validation: Where to Grow Next

Growth requires decisions about new segments, new features, and new markets. Expansion validation tests these growth hypotheses before committing development resources.

What to research. Does the adjacent segment have the same problem? Would existing customers pay for this additional feature? Is the international market sufficiently similar to justify localization?

Method. Run idea validation-style interviews with potential customers in the target expansion segment. Use the same methodology from Phase 1 (problem discovery, demand validation) but applied to the expansion hypothesis rather than the original product idea.

Budget. Fifteen expansion validation interviews per month at $20 each: $300.

Growth Phase Budget and Cadence

| Monthly Activity | Interviews | Cost |
|---|---|---|
| Churn research | 10 | $200 |
| Competitive intelligence | 15 | $300 |
| Expansion validation | 15 | $300 |
| Analytics tools (Hotjar Plus, SparkToro) | | $82 |
| Total | 40 | $882 |

Cadence. Run churn research continuously (every churned customer gets an exit interview). Rotate competitive intelligence and expansion validation studies biweekly. Review all findings in a monthly synthesis that maps patterns across study types.

Phase 4: Scale — Building Research Infrastructure

Scale phase is defined by a transition: from the founder doing all research personally to building systems that sustain research as the company grows. This does not mean stopping founder-led research. It means adding infrastructure so that research continues even when the founder is occupied with fundraising, hiring, or strategic decisions.

When Should You Hire a Researcher?

The decision to hire a dedicated researcher depends on three signals.

Time signal. When research consistently occupies more than 8 hours per week of founder time — including study design, review, synthesis, and action planning — the opportunity cost exceeds the salary cost of a junior researcher.

Volume signal. When you are running more than 40 interviews per month across three or more concurrent studies, coordination and synthesis become a full-time job. AI-moderated platforms handle execution, but interpreting and acting on findings requires focused human attention.

Revenue signal. When revenue supports a $60,000-80,000 salary (typically around $500K-1M ARR for solo-founder startups) without threatening runway. Until this threshold, AI-moderated platforms extend the founder’s research capacity at a fraction of the cost of a full-time hire.
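The three signals combine into a single check. A sketch, with thresholds taken from the text; the function itself and the rule that revenue acts as a hard gate are my own framing:

```python
# Combine the three hiring signals described above into one check.
# Thresholds come from the text; the decision rule is illustrative.

def ready_to_hire_researcher(hours_per_week, interviews_per_month, arr,
                             min_arr=500_000):
    signals = {
        "time":    hours_per_week > 8,       # founder time signal
        "volume":  interviews_per_month > 40, # study volume signal
        "revenue": arr >= min_arr,            # revenue signal
    }
    # Revenue is treated as a hard gate; time or volume indicates need.
    decision = signals["revenue"] and (signals["time"] or signals["volume"])
    return decision, signals

decision, signals = ready_to_hire_researcher(
    hours_per_week=10, interviews_per_month=45, arr=750_000)
print(decision, signals)
```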

Building a Research Knowledge Base

By Phase 4, you have accumulated dozens of studies across multiple research types. The compounding advantage depends on making previous findings accessible and actionable.

Organize by theme, not chronology. Group findings by topic (churn, onboarding, pricing, competitive) rather than by date. When a new churn question arises, you should be able to find every previous churn-related finding within minutes.

Track longitudinal patterns. Some insights only emerge across multiple studies conducted over months. Churn reasons shift over time. Competitive positioning evolves. Customer language changes as the market matures. A longitudinal view reveals these trends.

Connect research to outcomes. For every study, document what action was taken and what impact resulted. Over time, this creates an evidence base for the ROI of research itself, which is invaluable when justifying research investment to investors, partners, or future hires.
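A theme-first knowledge base needs very little structure to start. The sketch below captures the three principles above: group by theme, keep dates for longitudinal review, and record the action and outcome alongside each finding. The schema is an assumption for illustration, not a product feature:

```python
from collections import defaultdict
from datetime import date

# Minimal theme-first research knowledge base, as described above.
# The record fields are illustrative, not any platform's schema.
class KnowledgeBase:
    def __init__(self):
        self._by_theme = defaultdict(list)

    def add(self, theme, finding, study_date, action_taken=None, outcome=None):
        self._by_theme[theme].append({
            "finding": finding, "date": study_date,
            "action": action_taken, "outcome": outcome,
        })

    def findings(self, theme):
        """Everything ever learned about a theme, oldest first."""
        return sorted(self._by_theme[theme], key=lambda f: f["date"])

kb = KnowledgeBase()
kb.add("churn", "Billing page confuses annual-plan users",
       date(2024, 3, 1), action_taken="Redesigned billing page",
       outcome="Churn pattern disappeared in next month's exits")
kb.add("churn", "Missing Slack integration cited by 3 of 10 churned users",
       date(2024, 6, 1))
print([f["finding"] for f in kb.findings("churn")])
```

In practice this can live in a spreadsheet or a notes tool; the structure matters more than the software.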

Scale Phase Budget and Cadence

| Monthly Activity | Interviews | Cost |
|---|---|---|
| Ongoing churn and retention | 15 | $300 |
| Competitive intelligence | 15 | $300 |
| Strategic research (expansion, pricing, brand) | 20 | $400 |
| Analytics and intelligence tools | | $300 |
| Research synthesis and knowledge management | | $200 |
| Total | 50 | $1,500 |

Cadence. Weekly research reviews. Monthly synthesis reports. Quarterly strategic research planning. The rhythm becomes organizational rather than ad hoc.

How Does the Budget Scale From Pre-Launch to Post-PMF?

| Phase | Monthly Budget | Interviews/Month | Primary Questions |
|---|---|---|---|
| Pre-Launch | $200 | 10 | Does the problem exist? Will people pay? |
| Launch | $600-700 | 30-35 | Who are early adopters? Where does onboarding break? |
| Growth | $1,000 | 40 | Why do customers churn? Where should we expand? |
| Scale | $2,000 | 50+ | How do we build research infrastructure? |

The budget increase from $200 to $2,000 across phases reflects three realities: the questions become more complex, the stakes become higher, and the revenue to fund research has grown. A pre-launch startup spending $200 per month on validation is investing more (as a percentage of resources) than a growth-stage startup spending $2,000 per month.

The critical principle is that research spending should scale with both capability and consequence. Pre-launch research prevents the catastrophic mistake of building the wrong thing. Growth-stage research prevents the expensive mistake of losing customers to fixable problems. Scale-stage research prevents the strategic mistake of expanding in the wrong direction.

What Research Cadence Works for Each Phase?

Research cadence — how often you run studies — matters as much as what you study. Too infrequent and you are making decisions on stale data. Too frequent and research becomes busywork that distracts from execution.

Pre-Launch Cadence: One Study Every Two Weeks

Run one focused study (10-15 interviews) every two weeks. Alternate between problem discovery, demand validation, and pricing research. Review findings the week they arrive and adjust your product concept before starting the next study.

Why this works. Two-week cycles match the speed at which pre-launch hypotheses evolve. You learn something, adjust your thinking, and test the adjusted hypothesis in the next study. Six weeks of this cadence produces more validated learning than six months of building and hoping.

Launch Cadence: One Study Per Week

Run one study per week, rotating between early adopter interviews, onboarding research, and feature prioritization. Each study uses 8-10 interviews and produces results within 48-72 hours through AI-moderated platforms.

Why this works. Weekly research keeps pace with weekly product iterations. Monday you review last week’s findings. Tuesday through Thursday you build based on those findings. Friday you launch the next study. The cycle prevents the common failure mode where research and building operate on disconnected timelines.

Growth Cadence: Continuous With Monthly Synthesis

Run churn research continuously (every churned customer). Run competitive intelligence and expansion studies biweekly. Conduct a monthly synthesis that integrates findings across all study types and maps them to strategic priorities.

Why this works. Growth-stage decisions require pattern recognition across multiple research streams. The monthly synthesis is where churn data, competitive intelligence, and expansion findings converge into strategic clarity.

Scale Cadence: Weekly Reviews With Quarterly Planning

Weekly research reviews keep tactical decisions informed. Quarterly research planning ensures that the research agenda aligns with company strategy and that resources are allocated to the highest-leverage questions.

Why this works. At scale, the risk is not insufficient research but insufficiently connected research. Quarterly planning prevents the accumulation of disconnected studies that produce interesting data without strategic impact.

The Compounding Advantage: Why Starting Early Matters

The most powerful argument for starting research in Phase 1 is not the $200 you spend on problem discovery. It is the compounding return on every subsequent study.

Phase 1 research teaches you who your target customer is and what language they use to describe their problems. Phase 2 research builds on that foundation to understand how early adopters experience your product. Phase 3 research layers competitive context onto your existing customer understanding. Phase 4 research connects years of accumulated knowledge into strategic intelligence.

Each study is faster because you already know the landscape. Each study is more precise because previous findings narrow the hypothesis space. Each study is more actionable because you have built the context to interpret findings correctly.

This compounding effect is invisible early on. The first study feels slow and uncertain. The tenth study feels like pattern recognition. The fiftieth study produces insight within hours because you know exactly what to look for and what it means.

The solo founders who build this compounding advantage early — starting with a $200 per month research habit in Phase 1 — arrive at Phase 3 with a customer intelligence asset that no amount of funding can replicate quickly. Their competitors who started researching at Phase 3 are spending their first months discovering patterns that the early researcher documented long ago.

This is the durable advantage of consistent research: not any single study, but the accumulated understanding that makes every future decision slightly better informed. Over four phases and twelve to twenty-four months, those marginal improvements compound into a fundamentally different quality of decision-making. And for a one-person startup where every decision matters disproportionately, that compounding effect is the difference between building what the market wants and guessing at it.

The tools exist. AI-moderated interviews at $20 each have removed the cost barrier. The 48-72 hour turnaround has removed the time barrier. A 4M+ participant panel in 50+ languages has removed the access barrier. What remains is the decision to start — and the discipline to continue. The lifecycle described in this guide is not a prescription. It is a framework that adapts to your pace, your market, and your budget. But it only compounds if you begin.

Frequently Asked Questions

How much should a one-person startup budget for customer research?

Budget scales with phase. Pre-launch: $200 per month for 10 AI-moderated interviews. Launch: $500-700 per month for 25-35 interviews plus survey tools. Growth: $1,000 per month for 30-40 interviews plus analytics. Scale: $2,000 per month for 50+ interviews plus competitive intelligence tools. The key principle is allocating 2-5% of monthly burn rate to research.

When should a solo founder start doing customer research?

Before writing any code. Pre-launch research is the highest-leverage investment a solo founder can make because it validates the problem before committing resources to a solution. Ten depth interviews at $200 total can save twelve months and tens of thousands of dollars of building something nobody wants.

What research should you run before launch?

Three types: problem discovery interviews to confirm the problem exists and is painful enough to solve, demand validation interviews to assess willingness to pay and switching behavior, and pricing research to establish a viable price point. Run 10-15 interviews per research question using AI-moderated platforms for speed and cost efficiency.

Can you validate a startup idea without a research budget?

Start with free discovery conversations through your network and relevant online communities like Reddit. But recognize that free methods carry significant bias since you self-select participants. As soon as budget allows, invest in structured interviews with recruited target customers. Ten AI-moderated interviews at $20 each cost $200 and produce dramatically more reliable evidence than free alternatives.

What should you ask your first users?

Focus on three areas: what made them sign up (acquisition motivation), what almost made them leave during onboarding (friction points), and what they would miss most if the product disappeared (core value identification). These three questions reveal your acquisition message, your onboarding priorities, and your retention anchor.

How do you research churn as a solo founder?

Run AI-moderated exit interviews with churned customers within two weeks of cancellation. Ask about their original goal, what worked, what disappointed them, and what they switched to. Ten churn interviews typically reveal the two or three patterns driving most cancellations. At $20 per interview, a monthly churn study costs $200 and directly informs retention improvements.

When should a solo founder hire a dedicated researcher?

When research consistently takes more than 8 hours per week of your time, when you are running 40+ interviews per month across multiple concurrent studies, or when revenue supports a $60,000-80,000 salary without threatening runway. Most solo founders reach this point between $500K-1M in annual recurring revenue. Until then, AI-moderated platforms extend your capacity.

How many customer interviews should you run per month?

Minimum 10 per month for continuous learning. Pre-launch, run 10-15 focused on validation. Post-launch, run 15-25 split across onboarding, feature feedback, and churn. Growth stage, run 30-40 covering retention, competitive intel, and expansion. The per-interview cost of $20 with AI-moderated platforms makes these sample sizes economically viable.

How does pre-launch research differ from post-launch research?

Pre-launch research asks whether you should build something: does the problem exist, is demand strong enough, will people pay. Post-launch research asks how to improve what you built: why do users churn, which features matter most, where does onboarding break. The shift is from validating assumptions to optimizing reality.

Do AI-moderated interviews replace talking to customers yourself?

AI-moderated interviews complement founder-led conversations; they do not replace the instinct you build from hearing customer problems firsthand. Use AI-moderated interviews for systematic research with recruited participants at scale. Continue doing occasional direct conversations for the intuitive understanding that comes from personal interaction. The combination is stronger than either alone.

How do you build a sustainable research habit as a solo founder?

Block two hours every Monday for research: one hour reviewing results from the previous week's study, one hour designing the next study. Run 10 interviews per cycle using AI-moderated platforms that handle recruitment and moderation automatically. The habit compounds because each study informs the next, reducing setup time and increasing insight quality.

What are the most common research mistakes solo founders make?

The five most common: skipping pre-launch validation to ship faster, only talking to friends and existing users instead of recruited strangers, doing research once instead of continuously, treating survey data as a substitute for depth interviews, and waiting until a crisis to investigate churn. Each mistake is avoidable with a structured research cadence.
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.
