
How to Get Customer Feedback on a New Feature Before You Build It

By Kevin

The most expensive feature feedback is the kind you collect after launch. By that point, you have already spent the engineering cycles, made the tradeoffs, and shipped something that either hits or misses. Pre-build concept testing with real users takes days, not months, and it answers the question that matters most: should we build this at all?

Most SaaS product teams rely on internal conviction, competitive pressure, and feature request volume to decide what to build next. These inputs share a common flaw — they are all filtered through assumptions about what customers actually need. Feature requests tell you what users asked for, not what would change their behavior. Competitive features tell you what rivals built, not whether your users care.

Why post-launch feedback is structurally too late

The problem with gathering feedback after a feature ships is not just that it is slow. It is that organizational dynamics change once code is in production. Teams become attached to what they built. Sunk cost psychology makes it harder to kill or rework a feature that required weeks of engineering effort. Pre-build research inverts this. You talk to users before organizational attachment forms, test multiple concepts without building any of them, and recruit participants based on the problem you are trying to solve.

Concept testing for features: the right structure

Effective pre-build concept testing is not a focus group and it is not a survey. It is a structured conversation that moves through four phases, each designed to surface different kinds of insight.

Phase one: problem validation. Before describing any solution, ask users to walk through their current workflow. Where do they spend time they wish they did not? What workarounds have they built? This phase confirms whether the problem your feature addresses is real, frequent, and painful enough to warrant a behavior change. If users cannot articulate the problem unprompted, your feature is solving something they do not experience.

Phase two: context mapping. Explore the surrounding workflow. What tools are involved? Who else participates? What constraints exist that a new feature would need to respect? This phase reveals the adoption barriers that determine whether a feature gets used even if it works perfectly.

Phase three: concept reaction. Now introduce the feature concept in plain language or with a simple sketch. What would this change about their current process? What concerns come to mind? What would need to be true for them to actually use this? This phase surfaces the gap between theoretical appeal and practical adoption.

Phase four: priority calibration. Place the concept against other improvements the user might want. If they could only have one improvement this quarter, would this be it? This phase prevents building features that users say they want but would not prioritize over other needs.
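As a sketch, the four phases above could be captured in a lightweight discussion-guide structure. The phase names, goals, and prompts below are illustrative examples drawn from the descriptions in this section, not a prescribed script:

```python
# Illustrative discussion guide for a four-phase concept test.
# Wording of prompts is an example, not a required script.
DISCUSSION_GUIDE = [
    {
        "phase": "problem_validation",
        "goal": "Confirm the problem is real, frequent, and painful",
        "prompts": [
            "Walk me through your current workflow.",
            "Where do you spend time you wish you didn't?",
            "What workarounds have you built?",
        ],
    },
    {
        "phase": "context_mapping",
        "goal": "Map the surrounding workflow and adoption constraints",
        "prompts": [
            "What tools are involved?",
            "Who else participates in this process?",
            "What constraints would a new feature need to respect?",
        ],
    },
    {
        "phase": "concept_reaction",
        "goal": "Surface the gap between theoretical appeal and adoption",
        "prompts": [
            "What would this change about your current process?",
            "What concerns come to mind?",
            "What would need to be true for you to actually use this?",
        ],
    },
    {
        "phase": "priority_calibration",
        "goal": "Test whether users would prioritize this over other needs",
        "prompts": [
            "If you could have only one improvement this quarter, would this be it?",
        ],
    },
]

# Print a quick summary of the guide structure.
for step in DISCUSSION_GUIDE:
    print(f"{step['phase']}: {len(step['prompts'])} prompts")
```

Keeping the guide in a structured form like this makes it easy to run the same calibrated sequence across every conversation, whether a human or an AI moderator is asking the questions.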

Recruiting the right users

The quality of your pre-build research depends entirely on who you talk to. Recruiting the wrong participants produces feedback that is technically valid but strategically misleading.

The most common recruitment mistake is talking to your most engaged power users. They have already adapted to your product’s current capabilities and built workarounds. A feature that excites a power user may confuse a typical user.

Instead, recruit participants who represent the target use case rather than the target enthusiasm level. If the feature helps users who struggle with a specific workflow, talk to users who actually struggle with it — including users who have partially disengaged.

AI-moderated interviews are particularly effective here because they remove scheduling barriers. A user who would not block 45 minutes for a live call will often complete a 30-minute voice-based interview at their convenience. Complement first-party recruitment with panel recruitment when you need perspectives from non-users or lapsed users, ensuring you hear from the full range of potential feature adopters.

Question design that avoids leading

The fastest way to get useless pre-build feedback is to lead users toward the answer you want. This happens more often than teams realize, and it does not require overt bias. Subtle framing choices can skew results just as effectively as direct leading questions.

Describing a feature as something that “saves time” before asking whether it would be useful is leading. Asking “Would you use this?” invites a yes because saying no feels impolite. Non-leading design starts with the problem and lets users define its dimensions before any solution enters the conversation. Follow-up questions explicitly invite criticism: “What would make this not worth your time?” “Walk me through a situation where this would not help.”

The laddering methodology is critical here. When a user says “That sounds useful,” the follow-up is “Walk me through specifically how you would use this in your last project.” Grounding reactions in concrete scenarios separates genuine utility from abstract appeal. AI moderators are well-suited to this because they apply the same calibrated methodology to every conversation with no researcher ego invested in the feature concept.

Synthesizing feedback into build, kill, or iterate decisions

Raw interview transcripts are not decisions. The synthesis step — transforming 15-25 conversations into a clear recommendation — is where pre-build research creates or fails to create value.

Effective synthesis organizes findings around three questions. First, is the problem real and widespread enough to justify the investment? Second, does the proposed concept plausibly address the problem within users’ actual workflow constraints? Third, would users prioritize this over other improvements?

These three questions map to three outcomes. Build when the problem is confirmed, the concept fits the workflow, and users would prioritize it. Kill when the problem is not validated or the concept does not fit the workflow context, regardless of abstract appeal. Iterate when the problem is real but the proposed concept needs significant reworking to match how users actually think about the solution space.
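The mapping from the three synthesis questions to the three outcomes can be sketched as a simple rubric. This is a deliberately reductive illustration: real synthesis weighs evidence quality across conversations, not booleans, and the handling of a concept that fits the workflow but would not be prioritized is a judgment call (treated here as iterate):

```python
def recommend(problem_confirmed: bool,
              fits_workflow: bool,
              would_prioritize: bool) -> str:
    """Map the three synthesis questions to a build/kill/iterate call.

    Simplified sketch of the rubric described above; actual synthesis
    is evidence-traced, not a boolean checklist.
    """
    if not problem_confirmed or not fits_workflow:
        # Unvalidated problem or workflow mismatch kills the concept,
        # regardless of abstract appeal.
        return "kill"
    if would_prioritize:
        return "build"
    # Problem is real but the concept needs significant reworking
    # (assumed here to include "fits, but wouldn't be prioritized").
    return "iterate"


print(recommend(True, True, True))    # build
print(recommend(False, True, True))   # kill
print(recommend(True, True, False))   # iterate
```

The value of writing the rubric down, even informally, is that it forces the team to state which question each piece of interview evidence answers before arguing about the outcome.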

The complete guide to customer research for SaaS covers how to integrate these findings into product planning. The key principle is that pre-build research should produce a written recommendation with evidence-traced reasoning — specific quotes from specific conversations that support the conclusion. This makes the finding auditable, debatable, and durable.

Making pre-build research a habit, not a project

The teams that get the most value from pre-build feature research are the ones that make it routine rather than exceptional. When concept testing is a standard step in the development process — as normal as writing a spec or creating a design — it stops feeling like overhead and starts feeling like insurance.

The economics support this. A 20-interview concept test through AI-moderated research costs roughly $400 and takes 3-5 days. A feature that takes two engineers three weeks to build costs orders of magnitude more. If even one in five concept tests reveals a planned feature should be killed or reworked, the program pays for itself many times over.
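The arithmetic behind that claim can be made explicit. The $400 test cost and the one-in-five kill rate come from the text above; the $4,000 fully loaded cost per engineer-week is an assumed illustrative figure, not a quoted one:

```python
# Back-of-envelope economics for a pre-build concept testing program.
test_cost = 400                  # 20-interview AI-moderated concept test (from text)
weekly_engineer_cost = 4_000     # ASSUMED fully loaded cost per engineer-week
feature_cost = 2 * 3 * weekly_engineer_cost   # two engineers, three weeks

kill_rate = 1 / 5                # one in five tests kills or reworks a feature
expected_savings = kill_rate * feature_cost   # average savings per concept test

print(f"feature cost: ${feature_cost:,}")                      # $24,000
print(f"expected savings per test: ${expected_savings:,.0f}")  # $4,800
print(f"ROI multiple: {expected_savings / test_cost:.0f}x")    # 12x
```

Even with a much lower assumed engineering rate, the expected savings per test comfortably exceed the test's cost, which is the sense in which the program "pays for itself many times over."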

The compounding effect matters too. Each concept test adds to your understanding of how users think about your product and where the real opportunities for differentiation exist. Over time, this accumulated intelligence sharpens product intuition and reduces the number of concepts that need formal testing. The teams that invest in systematic pre-build research do not just build better features — they build fewer unnecessary ones, which is often the more valuable outcome.

Frequently Asked Questions

How many interviews do you need before deciding whether to build a feature?

Thematic saturation for feature concept testing typically occurs between 12 and 20 conversations. At that point, new interviews confirm existing patterns rather than introducing new themes. For high-stakes features with significant engineering investment, 25-30 conversations provide additional confidence. AI-moderated interviews make it practical to reach these numbers in 48-72 hours.

How do you ask about a feature without leading users toward the answer you want?

Start with the problem, not the solution. Ask users to describe their current workflow and where they experience friction before ever introducing the feature concept. When you do present the concept, use neutral language and explicitly invite criticism. Questions like "What concerns would you have about using this?" and "What would make this not worth your time?" counterbalance the natural tendency toward politeness.

Why do users say they want a feature but then not use it after launch?

This is the stated-versus-revealed preference gap. Users may express enthusiasm for a concept in the abstract but not change their behavior when it ships. Effective pre-build research mitigates this by grounding conversations in specific, recent workflow moments rather than hypothetical future behavior. Ask users to walk through the last time they encountered the problem your feature addresses, then evaluate whether the proposed solution would have actually changed their actions.

How long does pre-build concept testing take?

With AI-moderated interviews and access to a qualified panel, a complete pre-build concept test can run from study design to synthesized findings in 3-5 business days. This fits within a standard sprint cycle, meaning product teams can validate concepts without pausing development timelines. Traditional research methods take 4-8 weeks for the same output.

Should you test concepts with verbal descriptions or prototypes?

Both approaches have value at different stages. Early concept validation works well with verbal descriptions or simple sketches because you want to test the idea before investing in design. Once the concept is validated and you are deciding on specific implementations, lightweight prototypes or mockups help users react to concrete interactions rather than abstractions. The critical rule is to match fidelity to the decision you are making.