
Medical Device User Research: From Concept to Clinical Adoption

By Kevin Omwega, Founder & CEO

Medical device development is an expensive bet. Bringing a Class II device to market costs $30-50 million on average. Class III devices can exceed $100 million. Yet post-launch adoption failure remains common — not because the technology does not work, but because the device does not fit the workflow, the procurement decision-maker is not the persona the team designed for, or clinicians adopt workarounds that negate the intended clinical benefit.

User research reduces these risks. But “user research” in medical devices is not a single activity. It is a portfolio of research programs that span from early needs discovery through years of post-market monitoring, each with different user populations, methodologies, and stakes.

This guide covers the five phases of medical device user research and how to build a cumulative evidence base across all of them.


Phase 1: Pre-Market User Needs Discovery

The most consequential research happens before a single prototype exists. Pre-market needs discovery defines the problem space: what clinical workflow the device will fit into, what unmet needs it addresses, and what constraints the solution must respect.

Understanding the Context of Use

Every medical device operates within a clinical workflow that predates it. A surgical instrument must fit the sequence of steps, the physical environment, the time pressure, and the team dynamics of the operating room. A diagnostic device must integrate with existing information systems, credentialing requirements, and result-reporting workflows.

Needs discovery research maps this context in detail. The methods are observational (shadowing clinicians through their workflow), conversational (in-depth interviews about current practice, frustrations, and workarounds), and analytical (reviewing adverse event databases, clinical guidelines, and competitive product complaints).

Identifying the Real Unmet Needs

Clinicians are excellent at describing their frustrations. They are less reliable at specifying solutions. The researcher’s job is to translate workflow observations and clinician narratives into need statements that are precise enough to guide engineering without prematurely constraining the solution space.

A common trap: asking clinicians “what would you want?” This produces a feature list based on their mental model of current technology. The better approach is to probe around pain points and failure modes: “Walk me through the last time this procedure did not go as planned. What happened? At what point did you realize something was off? What would have had to be different for it to go better?”

At scale, AI-moderated interviews can accelerate needs discovery significantly. Running 100+ conversations with clinicians across specialties, practice settings, and geographies in 48-72 hours produces a needs landscape that would take a traditional research team months to assemble. Platforms like User Intuition apply emotional laddering methodology to probe beyond initial frustrations to the underlying workflow and decision-making dynamics.

Regulatory Implications

The FDA’s human factors engineering and usability engineering (HFE/UE) guidance begins at this stage. Needs discovery feeds directly into the use-related risk analysis that shapes the device’s intended use statement, user profile definitions, and use environment specifications — all of which become regulatory commitments.

Phase 2: Concept Validation

Once engineering has produced concept directions, research shifts to evaluation: which concepts best address the identified needs, and what risks do they introduce?

Early Concept Testing

At the concept stage, the goal is not usability validation — it is desirability and fit assessment. Does this concept address the clinician’s core frustration? Does it introduce new risks? Does it fit the temporal and spatial constraints of the clinical environment?

Concept testing with clinicians can use storyboards, workflow diagrams, or low-fidelity physical models. The interview explores not just whether the concept appeals to the clinician, but whether it integrates with the mental models they use to manage clinical procedures.

Multi-Stakeholder Concept Evaluation

A concept that excites the surgeon may alarm the procurement committee. A device that simplifies the nurse’s workflow may complicate the biomedical engineering team’s maintenance schedule. Effective concept validation includes all stakeholder groups, not just the primary clinical user.

This is where scale becomes important. Testing a concept with 15 surgeons, 15 nurses, 10 procurement officers, and 10 biomedical engineers — across multiple facility types and geographies — requires either a large budget for traditional research or a scalable methodology. AI-moderated interviews can reach these diverse populations efficiently, with each conversation tailored to the stakeholder’s decision context.

Iterating on Feedback

Concept validation is not a gate — it is a loop. Findings from concept testing should feed directly into design iterations. The research protocol should plan for 2-3 rounds of concept refinement, each informed by the previous round’s findings.

Phase 3: Usability Testing

Usability testing for medical devices is both a design tool and a regulatory requirement. It divides into formative studies (during development) and summative validation (before submission).

Formative Usability Studies

Formative studies are conducted iteratively during development to identify use errors and design improvements. They typically involve 5-8 participants per user group per round, using progressively higher-fidelity prototypes.

The critical discipline in formative testing is distinguishing between three types of findings: use errors (the user made a mistake because the design was unclear), close calls (the user almost made a mistake but self-corrected), and difficulties (the user completed the task but found it harder than necessary). Each type implies a different design response.
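The three-way distinction above can be enforced with a simple tagging scheme so that each round's observations are tallied consistently. This is an illustrative sketch only — the task names, participant IDs, and category labels are hypothetical, not drawn from any FDA taxonomy:

```python
from collections import Counter
from enum import Enum

class FindingType(Enum):
    USE_ERROR = "use_error"    # mistake made because the design was unclear
    CLOSE_CALL = "close_call"  # user almost erred but self-corrected
    DIFFICULTY = "difficulty"  # task completed, but harder than necessary

# Hypothetical log from one formative round: (participant, task, finding)
observations = [
    ("P1", "prime_line", FindingType.USE_ERROR),
    ("P2", "prime_line", FindingType.CLOSE_CALL),
    ("P3", "prime_line", FindingType.USE_ERROR),
    ("P1", "set_dose", FindingType.DIFFICULTY),
]

def tally_by_task(obs):
    """Count finding types per task so design responses can be prioritized."""
    return Counter((task, finding) for _, task, finding in obs)

counts = tally_by_task(observations)
print(counts[("prime_line", FindingType.USE_ERROR)])  # 2
```

Tallying per task (rather than per participant) highlights which step of the workflow is generating repeated use errors and therefore needs a design response rather than a training fix.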

Formative studies must be conducted in simulated use environments that replicate the relevant clinical conditions — lighting, noise, time pressure, glove use, and the presence of other equipment. A device that tests well in a quiet conference room may fail in a busy ICU.

Summative Usability Validation

Summative testing is the final human factors validation before regulatory submission. It is designed to demonstrate that the device can be used safely and effectively by the intended user populations under the intended use conditions.

Summative testing is not the place for discovery. The design should be locked before summative testing begins. If the summative study reveals critical use errors, the project faces a costly loop back to redesign and re-validation.

The protocol must define success criteria in advance: which tasks are tested, what constitutes a use error versus a difficulty, and what error rate is acceptable for each task. FDA reviewers will evaluate not just the results but the rigor of the protocol design.
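Pre-registering those criteria can be as simple as a machine-readable table that the analysis is checked against after the study. A minimal sketch, with hypothetical task names and thresholds (real summative protocols typically allow zero use errors on safety-critical tasks):

```python
# Acceptance criteria defined BEFORE the summative study runs.
# Task names and thresholds are illustrative assumptions.
protocol = {
    "prime_line":   {"critical": True,  "max_use_errors": 0},
    "set_dose":     {"critical": True,  "max_use_errors": 0},
    "clean_device": {"critical": False, "max_use_errors": 2},
}

def evaluate(results):
    """results maps task -> observed use-error count; returns failing tasks."""
    return [task for task, crit in protocol.items()
            if results.get(task, 0) > crit["max_use_errors"]]

failed = evaluate({"prime_line": 1, "set_dose": 0, "clean_device": 1})
print(failed)  # ['prime_line']
```

Because the thresholds are fixed before data collection, a failing result cannot be rationalized away after the fact — which is exactly the rigor reviewers look for.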

Documentation for Regulatory Submission

Every usability study — formative and summative — must be documented in the human factors engineering report submitted to the FDA. This report traces the evidence chain from user needs analysis through design inputs, formative testing iterations, and summative validation results. Gaps in this chain are the most common human factors deficiency cited in FDA review letters.

Phase 4: Procurement Decision Research

A technically excellent, clinically validated device still has to be purchased. Procurement research investigates the buying process — who is involved, what criteria drive the decision, and where in the evaluation cycle the device wins or loses.

Mapping the Decision Unit

Medical device procurement decisions typically involve 5-8 stakeholders across clinical, administrative, IT, and financial roles. Each has different evaluation criteria and different influence on the final decision. Clinical champions evaluate efficacy and workflow fit. CFOs evaluate total cost of ownership. IT evaluates integration requirements. Value analysis committees evaluate against competing capital priorities.

Research should map this decision unit for each target facility type: who initiates the evaluation, who has veto power, who makes the final approval, and what evidence each stakeholder requires.
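The decision-unit map described above lends itself to a simple structured record per facility type. The roles, criteria, and flags below are hypothetical examples of what such a map might capture:

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    role: str
    criteria: list          # evidence this stakeholder requires
    can_veto: bool = False
    initiates: bool = False

# Hypothetical decision unit for one facility type (e.g. community hospital)
decision_unit = [
    Stakeholder("surgeon", ["efficacy", "workflow fit"], initiates=True),
    Stakeholder("CFO", ["total cost of ownership"], can_veto=True),
    Stakeholder("IT", ["integration requirements"], can_veto=True),
    Stakeholder("value analysis committee", ["capital priorities"], can_veto=True),
]

veto_holders = [s.role for s in decision_unit if s.can_veto]
print(veto_holders)  # ['CFO', 'IT', 'value analysis committee']
```

Even a lightweight record like this makes gaps visible: if no interview evidence exists for a role with veto power, the commercial team is flying blind on that stakeholder.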

Understanding Procurement Criteria

The criteria that procurement teams use often differ from what device companies assume. Technical specifications matter, but they are rarely the deciding factor between competitive products. Service and support reputation, implementation complexity, training requirements, compatibility with existing systems, and vendor stability often weigh more heavily — particularly for devices that require ongoing consumables or software updates.

AI-moderated interviews with procurement decision-makers across 50-100 facilities can reveal these criteria at a scale that transforms anecdotal sales intelligence into systematic market understanding.

Competitive Switching Research

For devices entering an established category, understanding why facilities switch (or do not switch) from incumbent solutions is critical. Switching research probes the specific trigger events that open an evaluation cycle, the competitive evaluation criteria, the implementation risks that make committees hesitate, and the post-switch experience that determines whether the facility recommends the device to peers.

Phase 5: Post-Launch Adoption Tracking

Regulatory clearance and initial sales are the beginning, not the end, of the user research program. Post-launch research tracks whether the device achieves its intended clinical benefit in real-world conditions.

Adoption Curve Research

Device adoption within a facility is not binary. It follows a curve: initial training, tentative first use, growing confidence, routine adoption, and eventually either deep integration or gradual abandonment. Research at each stage reveals different insights.

Early-stage research identifies training gaps and workflow integration challenges. Mid-stage research uncovers the workarounds clinicians develop when the device does not perfectly match their practice patterns. Late-stage research reveals whether the device has achieved its intended clinical benefit or whether usage has drifted from the approved indication.

Longitudinal User Feedback Programs

The most valuable post-launch research programs are continuous. Running 20-30 AI-moderated interviews per quarter with clinicians who use the device provides an ongoing signal of adoption health, emerging use issues, and feature enhancement priorities.

These conversations feed into a cumulative knowledge base — an Intelligence Hub where every interview, every insight, and every trend is searchable, evidence-traced, and cross-referenced. Over time, this knowledge base becomes the organization’s deepest source of clinical user understanding, informing not just the current product’s lifecycle management but the next generation’s development priorities.

Post-Market Surveillance Integration

Post-launch user research complements formal post-market surveillance by providing the qualitative “why” behind quantitative signals. When complaint rates increase in a specific use environment, user research can rapidly investigate the root cause — is it a training issue, a workflow mismatch, a use-environment factor the design did not account for, or a genuine device performance concern?

Building a Cumulative Evidence Base

The five phases of medical device user research are often treated as separate projects, each producing its own report. The result is fragmented knowledge: needs discovery findings get lost during the transition from R&D to commercial; usability study insights do not inform procurement messaging; post-launch feedback does not feed back to the next-generation design team.

The alternative is a cumulative evidence base that connects all five phases. When a post-launch clinician interview references a workflow frustration, that finding links back to the original needs discovery work, the concept trade-offs that were made during development, and the usability testing that validated the chosen design. This traceability transforms a collection of research studies into a coherent, evolving understanding of the device’s relationship with its users.

Building this evidence base is a data architecture problem as much as a research methodology problem. It requires a system that can store structured and unstructured data, link findings across studies and time periods, and make the full evidence base searchable for anyone involved in device development, regulatory affairs, clinical marketing, or post-market monitoring.
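The core of that architecture is the link structure: every finding carries references to the earlier findings it builds on, so any insight can be traced back through the phases. A minimal sketch (the IDs, phase names, and summaries are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    id: str
    phase: str          # e.g. "needs_discovery", "usability", "post_launch"
    summary: str
    links: list = field(default_factory=list)  # IDs of earlier related findings

# Hypothetical trace: a post-launch workaround linked back through the phases
store = {f.id: f for f in [
    Finding("ND-12", "needs_discovery", "Nurses re-prime lines under time pressure"),
    Finding("US-07", "usability", "Priming-step confusion in formative round 2", ["ND-12"]),
    Finding("PL-03", "post_launch", "Workaround: skipping confirmation prompt", ["US-07"]),
]}

def trace(finding_id):
    """Walk link references back to the earliest connected finding."""
    chain, current = [], store[finding_id]
    while current:
        chain.append(current.id)
        current = store[current.links[0]] if current.links else None
    return chain

print(trace("PL-03"))  # ['PL-03', 'US-07', 'ND-12']
```

In production this would live in a database with full-text search rather than an in-memory dict, but the traceability property — every post-launch signal resolvable to its pre-market origin — is the same.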

Medical device companies that build this capability — continuous, connected, cumulative user research from concept through adoption — make better development decisions, write stronger regulatory submissions, sell more effectively, and manage product lifecycle risks earlier. The research investment compounds across the entire product life.

Frequently Asked Questions

Who are the users in medical device research?

Medical devices typically have multiple user populations: clinicians who operate the device (surgeons, nurses, technicians), patients who receive treatment from or interact with the device, procurement teams who make purchasing decisions, IT and biomedical engineering teams who integrate and maintain the device, and administrators who approve capital expenditure. Effective research accounts for all of these stakeholders, not just the primary clinical user.

How does medical device user research differ from general product research?

Three key differences: regulatory requirements (FDA human factors guidance mandates specific usability validation), the clinical context (devices are used in high-stress, time-critical environments where errors have serious consequences), and the multi-stakeholder purchase decision (the person who uses the device, the person who buys it, and the person who approves the budget are typically three different people with different evaluation criteria).

When should user research begin?

Before writing the first product requirement. The most common and expensive mistake in medical device development is building a solution before fully understanding the clinical workflow it must integrate into. Pre-market user needs research — understanding the context of use, the existing workflow, the unmet needs, and the failure modes of current solutions — should precede concept development.

How many participants does usability testing require?

The FDA's human factors guidance does not specify a minimum number of participants but recommends that validation testing include a sufficient number to represent each intended user group. In practice, most submissions include 15-25 participants per user group for summative usability testing. Formative studies during development often use smaller samples of 5-8 participants per iteration cycle.

Can AI-moderated interviews be used for medical device research?

Yes, particularly for needs discovery, concept testing, procurement decision research, and post-launch adoption tracking — phases where the goal is understanding experiences, preferences, and decision processes at scale. AI-moderated interviews can reach 200+ clinicians, procurement officers, or patients in 48-72 hours. Hands-on usability testing with physical prototypes still requires in-person observation.
Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.