Why Direct Pricing Questions Fail
“How much would you pay for this?” produces unreliable data. Users anchor to their current spend and offer numbers that seem reasonable rather than numbers that reflect actual willingness to pay. Both errors are common: overstating willingness (social desirability bias) and understating it (strategic underreporting to influence pricing).
The fix is behavioral pricing research: reconstructing how users actually evaluate and respond to pricing rather than asking them to predict what they would do.
Behavioral Pricing Interview Techniques
Technique 1: Plan Selection Reconstruction
“Walk me through how you chose your current plan.”
This surfaces the actual decision process: which features drove the selection, what felt unclear on the pricing page, whether the chosen plan was the right fit or a compromise, and what almost made them choose differently.
Technique 2: Value Perception Mapping
“What do you feel like you’re paying for? What are you paying for but not using?”
Reveals perceived value versus actual usage. Features that users pay for but do not use represent pricing vulnerability — they will downgrade when they notice. Features they value but do not have represent upgrade potential.
Technique 3: Price Elasticity Through Scenarios
“If the price doubled tomorrow, what would you do?”
The response reveals price sensitivity through behavioral prediction rather than abstract willingness-to-pay. “I’d cancel” = high sensitivity. “I’d complain but stay” = strong value lock-in. “I’d need to get approval” = the real decision sits with an organizational buyer.
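When coding these responses at scale, the three reactions above can be treated as a simple tagging scheme. A minimal sketch follows; the category labels and keyword cues are illustrative assumptions, not a standard taxonomy, and real transcripts would need richer matching or human review.

```python
# Hypothetical coding scheme for "if the price doubled" responses.
# Cue words and category names are assumptions for illustration only.
SENSITIVITY_CODES = {
    "cancel": "high sensitivity",        # "I'd cancel" / churn risk
    "switch": "high sensitivity",
    "complain": "value lock-in",         # "I'd complain but stay"
    "approval": "organizational buyer",  # "I'd need to get approval"
    "budget": "organizational buyer",
}

def code_response(transcript: str) -> str:
    """Assign the first matching sensitivity code, else flag for manual review."""
    text = transcript.lower()
    for cue, code in SENSITIVITY_CODES.items():
        if cue in text:
            return code
    return "uncoded: review manually"
```

Because ambiguous answers fall through to manual review rather than being force-fit into a category, the sketch errs toward under-coding, which is usually the safer failure mode in qualitative analysis.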
Technique 4: Problem-Cost Anchoring
“How does the cost of [Product] compare to the cost of the problem it solves?”
This reframes pricing against value rather than against competitors. If users perceive a $100K problem and a $10K solution, pricing power exists. If they perceive a $5K annoyance and a $10K tool, the gap needs addressing.
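The comparison above reduces to a ratio of perceived problem cost to price. A minimal sketch, using the figures from the example; the interpretation thresholds are assumptions, not benchmarks from the text:

```python
def value_gap_ratio(problem_cost: float, price: float) -> float:
    """Ratio of the perceived cost of the problem to the price charged.

    Assumed reading: a ratio well above 1 suggests pricing power;
    a ratio below 1 signals a value-perception gap to close.
    """
    return problem_cost / price

value_gap_ratio(100_000, 10_000)  # 10.0 -> clear pricing power
value_gap_ratio(5_000, 10_000)    # 0.5  -> value gap to address
```

Note that the inputs are *perceived* costs elicited in the interview, not measured ROI, so the ratio describes the user's mental model rather than the product's actual economics.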
Technique 5: Competitive Price Context
“What other tools in this price range do you pay for? How does the value compare?”
Anchors your pricing against the user’s actual spending context and reveals where they slot you in their budget hierarchy.
Common Findings from SaaS Pricing Research
Packaging mismatch: Plan tiers do not align with how users actually use the product. The “Pro” plan includes features for analysts while the users on that tier are all marketers.
Upgrade barriers: Users want a feature on the next tier but the price jump is too large relative to the incremental value they perceive.
Feature awareness gaps: Users on lower tiers do not know about features on higher tiers that would solve their problems — an expansion opportunity hidden by poor upgrade communication.
Value articulation failure: Users cannot explain the ROI to their finance team — not because the ROI does not exist, but because the product does not equip them with the language to make the business case.
Competitive price anchoring: Users compare your pricing to a competitor’s lower tier even though the capabilities differ. Perception management, not price reduction, is the fix.
Running the Study
Use the pricing research template with 30-50 AI-moderated interviews across plan tiers. Total cost: $1,800-$2,800 including incentives.
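For budgeting, the stated ranges imply a per-interview cost band. A small sketch of that arithmetic, assuming cost and interview count vary independently:

```python
def per_interview_cost(total_low: float, total_high: float,
                       n_low: int, n_high: int) -> tuple[float, float]:
    """Bound the cost per interview given budget and sample-size ranges."""
    return (total_low / n_high, total_high / n_low)

low, high = per_interview_cost(1_800, 2_800, 30, 50)
# Roughly $36-$93 per interview across the stated ranges.
```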
Conduct pricing research semi-annually and before any pricing change. Store findings in the Intelligence Hub to track how price perception shifts over time.