How CHG Healthcare Found Their Physician Walk-Away Problem Was Never About Money
CHG Healthcare's market research team partnered with User Intuition to investigate why physicians abandon locum tenens assignments after accepting — and found the answer hiding in operational details, not competitor pay.
A Business Problem Nobody Could Fully Explain
CHG Healthcare — one of the largest healthcare staffing companies in the US — had a physician retention problem that was hiding in plain sight. Physicians were accepting locum tenens assignments, then walking away before they started. The pattern was consistent enough to hurt, and the impact landed most directly on sales.
The working hypothesis was intuitive: physicians must be leaving for competitors offering more money. The question under the question, as the team framed it, was whether competing firms were simply paying more and pulling physicians away before they ever started. It made sense on paper. Locum tenens is a competitive market, compensation is a primary driver for most physicians, and CHG's sales team had seen enough walk-aways to make the pattern feel obvious.
But intuition isn't insight. And acting on the wrong diagnosis — doubling down on compensation when that wasn't actually the issue — would have meant solving a problem that didn't exist, while the real problem continued unaddressed.
Melissa Flinn, CHG's Market Research Lead, had two constraints working against a clean research process. First, physicians are one of the hardest professional audiences to reach for qualitative research — busy schedules, limited availability, and historically low response rates make even getting to a conversation a win. Physicians are also frequently between things: in transit, between patients, or on call — making video participation impractical for most. Audio was the only format that could realistically fit their lives. Second, the project had no budget. Not a limited budget — no dedicated budget at all. The team needed a method that could deliver real qualitative depth without the cost that traditionally comes with it.
Traditional in-depth interview programs — with human recruiters, a moderator, and a full debrief process — take 6–8 weeks and carry a significant cost per study. That timeline and price tag weren't compatible with what the team had to work with. A faster, leaner method was worth exploring, even if it meant trying something new.
Melissa had been on a Quirks webinar where she saw Listen Labs present their AI-moderated research platform. After a demo, she started exploring comparable solutions — and found User Intuition. The comparison wasn't complicated: User Intuition offered the same AI-moderated interview format with a setup experience that felt streamlined from the first login.
The alternative on the table was a traditional qualitative approach — win-loss research firms, human recruiters, scheduled moderated sessions, manual synthesis. That path typically runs $10,000–$15,000 per engagement and takes weeks to field. With no budget and an executive team expecting answers fast, it wasn't a real option. User Intuition offered something the traditional market didn't: the depth of qualitative interviews at a cost and speed that a zero-budget project could actually use.
AI Moderation, CRM Participants, a Hard-to-Reach Audience
Setup was straightforward. The platform was easy to navigate, the study configuration felt logical, and the team was up and running fast. Most of what felt like project delay was healthcare legal review — not the research platform itself.
CHG sourced participants from their own CRM — a database of physicians who had previously taken locum tenens assignments. The audience was deliberately narrow: a specific professional segment with notoriously low survey response rates. Getting enough physicians into conversations took persistent outreach and follow-up, the kind of effort any researcher would put in for a high-value B2B audience. Once they were in, they showed up.
During an early walkthrough of the AI moderator, Hannah N. on CHG's media team said it plainly: "I'm just shocked at how not robotic it is." The moderator asked the right questions, followed threads, and didn't telegraph where it was trying to go — letting physicians surface their own reasoning instead of anchoring them to a hypothesis. That single design choice is why the next section happened the way it did.
The Assumption Was Wrong
Going in, the hypothesis was clear: physicians were walking away for better pay at competing firms. The team fully expected the interviews to confirm it.
They didn't.
What surfaced instead was a pattern of process friction and upfront information gaps. Physicians weren't leaving because the money was better elsewhere. They were leaving because the assignment details they needed weren't available when they needed them — what the job actually entailed wasn't arriving early enough, or completely enough, to hold their commitment through the acceptance phase.
This is the kind of finding that wouldn't have surfaced from a survey. It wouldn't have surfaced from a sales rep debrief. It surfaced because the AI moderator probed without pushing — opening up context rather than confirming a predetermined frame — and the physicians talked through their real experience. The friction points came out of what they actually said, not what the team expected to hear.
Enlightening, in Melissa's words. And actionable.
The insights landed across the team — surprising more than a few stakeholders along the way. The findings pointed at a concrete operational fix: streamlining CHG's intake process so physicians have what they need about an assignment upfront, before they commit, not after.
That work is moving forward. The research didn't just answer a question — it pointed at the next decision.
And it isn't stopping there. CHG is already planning ongoing physician research as a continuous capability instead of a one-off project, with parallel use cases for internal audiences — sales reps and other teams running quick on-demand conversations on their own schedule. Whenever the next question about physicians or advanced practice providers comes up, the team has a tool that can answer it in days, not quarters.
"We went in expecting to hear that physicians were leaving for more money at competing firms. What the interviews showed us was something entirely different — it was all about process friction and the upfront details they weren't getting. We were really pleased with how good the quality was, and the actionability that came from it."
Melissa F., Market Research Lead, CHG Healthcare
The Bottom Line
CHG Healthcare came in to confirm a compensation story. They left with an operational insight they could actually act on.
It happened at a cost their original budget didn't include, in a timeline traditional qualitative research couldn't have matched, with a methodology rigorous enough to overturn a hypothesis the team had been confident in for years.
For healthcare teams investigating provider experience, retention drivers, or workflow friction, this is the new playbook: skip the assumption, hear from physicians directly, and find out what's actually driving behavior — before another round of decisions gets made on a hunch.