When users don't click, it's not laziness—it's weak information scent. Learn how to diagnose and fix the gap between what users expect and what they actually find.

A product team at a B2B software company spent three months building a new feature. They placed the entry point prominently in the navigation. Usage in the first month: 4% of active users. The feature worked beautifully. The problem wasn't the feature—it was the scent.
Information scent describes the perceived likelihood that a particular path will lead to desired information. When users evaluate whether to click a button, open a menu, or follow a link, they're making rapid predictions about what they'll find. Strong scent means the label, context, and visual treatment clearly signal the destination. Weak scent means users can't confidently predict where they're going, so they don't go anywhere.
Research from the Nielsen Norman Group analyzing thousands of usability sessions found that users follow information scent with remarkable consistency. When scent is strong, task completion rates exceed 80%. When scent weakens, completion rates drop below 30%. The difference isn't user capability—it's the quality of the trail you've laid.
Weak information scent creates measurable business impact. A 2023 analysis of 47 SaaS products found that features with weak entry point scent averaged 67% lower adoption than features with strong scent, even when both feature sets delivered similar value once users engaged. The gap isn't about feature quality—it's about whether users can confidently predict that clicking will be worth their time.
The cost compounds over time. When users repeatedly encounter weak scent, they develop learned helplessness about navigation. They stop exploring. A longitudinal study tracking user behavior across 12 months found that users who experienced three consecutive instances of weak scent (clicking something and finding it wasn't what they expected) reduced their overall exploration behavior by 43% in subsequent sessions. They'd been trained that clicking is risky.
This creates a vicious cycle. Product teams see low feature adoption and assume users don't want the feature. They deprioritize improvements. Meanwhile, users who would benefit from the feature never discover it because the scent trail is too weak to follow. The feature exists but remains functionally invisible to the people who need it most.
Users don't consciously think about information scent, but they're constantly evaluating it. The process happens in milliseconds, drawing on multiple information sources simultaneously.
Label semantics provide the primary scent signal. Users match words in labels against their mental model of what they're trying to accomplish. The match doesn't need to be perfect, but it needs to be confident. "Export data" creates stronger scent for someone trying to download information than "Data tools" because the semantic distance is shorter. The user doesn't have to wonder whether "tools" includes export functionality—the label makes it explicit.
Context shapes interpretation. The same label creates different scent depending on where it appears. "Settings" in a top navigation bar suggests account-level configuration. "Settings" next to a specific chart suggests chart-specific options. Users build expectations based on spatial relationships, visual hierarchy, and proximity to related elements.
Visual treatment adds another layer of information. Button styling, icon choices, and color all contribute to scent. A primary button suggests a main action path. A text link suggests supplementary information. An icon can strengthen scent when it matches user expectations (a download icon next to "Export") or weaken it when the connection is unclear (a gear icon next to "Insights").
Prior experience creates baseline expectations. Users who've used similar products bring mental models about where things "should" be. These expectations aren't universal—B2B software users expect different patterns than consumer app users—but they're powerful within domains. Fighting established patterns requires exceptionally strong scent to overcome the cognitive friction.
Weak scent manifests in specific behavioral patterns. Users hover over elements without clicking. They click, quickly scan the destination, and immediately navigate back. They repeatedly return to the same starting point, suggesting they're searching but not finding. Analytics can surface these patterns, but they rarely explain why the scent is weak.
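To make these patterns concrete, here is a minimal sketch of how an analytics pipeline might classify a single user's ordered event stream into the signals above. The event shape, element names, and thresholds are illustrative assumptions, not any particular vendor's schema.

```typescript
// Hypothetical interaction event, as an analytics pipeline might record it.
interface NavEvent {
  userId: string;
  element: string;      // e.g. "nav.reports"
  type: "hover" | "click" | "back";
  timestamp: number;    // ms since epoch
}

type ScentSignal = "hover-no-click" | "quick-bounce" | "repeated-return";

// Classify one user's ordered event stream into weak-scent signals.
function detectScentSignals(events: NavEvent[]): ScentSignal[] {
  const signals: ScentSignal[] = [];
  const visitCounts = new Map<string, number>();

  for (let i = 0; i < events.length; i++) {
    const e = events[i];
    const next = events[i + 1];

    // Hover that never converts to a click on the same element:
    // the user evaluated the scent and passed.
    if (e.type === "hover" && (!next || next.element !== e.element)) {
      signals.push("hover-no-click");
    }

    // Click followed by a back-navigation within 5s: expectation not met.
    if (e.type === "click" && next?.type === "back" &&
        next.timestamp - e.timestamp < 5000) {
      signals.push("quick-bounce");
    }

    // Third visit to the same hub in one session: hunting behavior.
    if (e.type === "click") {
      const n = (visitCounts.get(e.element) ?? 0) + 1;
      visitCounts.set(e.element, n);
      if (n >= 3) signals.push("repeated-return");
    }
  }
  return signals;
}
```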
The challenge is that weak scent often looks like other problems. Low click rates could indicate weak scent, or they could mean users don't need the feature, or they could reflect poor visibility. Distinguishing between these requires understanding user intent and expectations.
Traditional user testing can identify scent problems, but the artificial context often masks real-world scent issues. When you ask someone to complete a specific task, you're giving them a goal they might not naturally have. They'll follow weak scent because they're motivated to complete the task. In actual usage, they would have given up.
This is where in-product research creates unique diagnostic value. When users encounter navigation elements in their natural workflow, their behavior reveals actual scent strength. A research platform that can intercept users at decision points—when they're hovering over a button, when they've just navigated back from a page, when they're repeatedly accessing the same area—captures scent problems in context.
User Intuition's approach to this diagnostic challenge involves detecting behavioral signals that suggest scent confusion, then immediately asking targeted questions. When a user hovers over a navigation element for more than two seconds without clicking, the system can prompt: "What would you expect to find if you clicked here?" When a user clicks and quickly returns, it can ask: "Was this what you were looking for?" The responses reveal the gap between expectation and reality.
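As an illustration of the intercept pattern (a sketch, not User Intuition's actual implementation), a browser-side hover watcher might look like this, where `askUser` is a hypothetical stand-in for the prompt UI and the two-second threshold mirrors the example above.

```typescript
// Minimal browser-side sketch of the hover-hesitation intercept.
const HOVER_THRESHOLD_MS = 2000;

function watchForScentHesitation(element: HTMLElement): void {
  let hoverTimer: number | undefined;
  let clicked = false;

  element.addEventListener("mouseenter", () => {
    clicked = false;
    hoverTimer = window.setTimeout(() => {
      // The user has hovered for 2s without committing: prompt them.
      if (!clicked) {
        askUser("What would you expect to find if you clicked here?");
      }
    }, HOVER_THRESHOLD_MS);
  });

  element.addEventListener("mouseleave", () => window.clearTimeout(hoverTimer));
  element.addEventListener("click", () => {
    clicked = true;
    window.clearTimeout(hoverTimer);
  });
}

// Hypothetical prompt helper; a real implementation would render an
// unobtrusive in-product survey widget instead of logging.
function askUser(question: string): void {
  console.log(`[intercept] ${question}`);
}
```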
The methodology matters because scent problems are often subtle. A label might be 80% clear—close enough that users can eventually figure it out, but not clear enough that they confidently click. These marginal scent issues don't show up in analytics as obvious problems, but they create friction that accumulates across thousands of users.
Certain patterns of weak scent appear repeatedly across products. Recognizing these patterns helps diagnose problems faster.
Jargon-laden labels create weak scent for new users while potentially creating strong scent for experts. A B2B analytics platform labeled a feature "Cohort analysis." Experienced analysts knew exactly what this meant. New users trying to understand customer behavior patterns didn't recognize it as relevant to their goal. The scent was strong for one segment and nonexistent for another. The solution wasn't to dumb down the language—it was to provide multiple entry points with different scent profiles.
Overly generic labels force users to guess. "Tools," "Resources," "More"—these labels carry almost no scent because they could lead anywhere. Users have to click to find out what's there, which means they often don't click at all. A SaaS product consolidated several features under a "Tools" menu to clean up the navigation. Feature usage across those consolidated items dropped 41% in the following month. The features weren't worse—they were just harder to smell.
Ambiguous verbs weaken action scent. "Manage settings" could mean view, edit, configure, or delete. "Review results" could mean read, analyze, approve, or edit. The ambiguity creates hesitation. Users aren't sure what will happen when they click, so they delay the decision or avoid it entirely. Specific verbs—"Edit settings," "Approve results"—create stronger scent because the outcome is predictable.
Mismatched mental models occur when your taxonomy doesn't align with how users think about their work. A project management tool organized features by technical architecture: "Resource allocation," "Timeline management," "Dependency tracking." Users thought in terms of project phases: "Planning," "Execution," "Review." The features existed, but the scent trail didn't match the user's mental map of their work.
Icon-only navigation can create weak scent when icons are ambiguous or unfamiliar. Icons work well for universal concepts (home, search, settings) and when paired with labels. Used alone for product-specific concepts, they force users to learn your visual language before they can navigate confidently. A productivity app replaced text labels with custom icons to save space. Support tickets about "where did X feature go" increased 300% despite the features remaining in identical positions.
Quantifying information scent helps prioritize improvements and measure impact. Several metrics provide useful signals.
Click-through rate on navigation elements establishes a baseline, but interpretation requires context. Low CTR could indicate weak scent or low need. High CTR doesn't necessarily mean strong scent—it could mean users are clicking because they can't find a better option.
Immediate bounce rate—the percentage of users who click and immediately return—directly measures scent accuracy. If 40% of users who click "Reports" immediately navigate back, the label is creating expectations that the destination doesn't fulfill. This metric isolates scent problems from other navigation issues.
Time to first click on a page measures hesitation. When users land on a page and pause for extended periods before taking action, they're struggling to predict which path leads to their goal. Long hesitation times suggest weak scent across multiple options.
Return navigation patterns reveal search behavior. When users repeatedly return to the same hub or menu, they're hunting. They're following scent trails that peter out, then returning to try another path. High return rates to specific navigation points suggest the outbound scent from those points is consistently weak.
Qualitative scent measurement asks users directly about their expectations. Before they click, what do they expect to find? After they click, did they find it? The gap between expectation and reality quantifies scent accuracy. Research conducted across 23 products found that when expectation-reality match exceeded 85%, task completion rates averaged 79%. When match fell below 60%, completion rates dropped to 31%.
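To ground two of these metrics, here is a minimal sketch computing immediate bounce rate and expectation-reality match from logged data. The record shapes and the five-second bounce window are assumptions for illustration.

```typescript
// Sketch: aggregate scent metrics from logged data.
interface ClickRecord {
  element: string;
  dwellMs: number;           // time on destination before the next navigation
}

interface ExpectationResponse {
  expected: string;          // what the user said they expected to find
  matched: boolean;          // coded as matching (or not) what they found
}

const BOUNCE_WINDOW_MS = 5000;

// Immediate bounce rate for one navigation element.
function immediateBounceRate(clicks: ClickRecord[], element: string): number {
  const relevant = clicks.filter(c => c.element === element);
  if (relevant.length === 0) return 0;
  const bounces = relevant.filter(c => c.dwellMs < BOUNCE_WINDOW_MS).length;
  return bounces / relevant.length;
}

// Expectation-reality match rate across intercept responses.
function matchRate(responses: ExpectationResponse[]): number {
  if (responses.length === 0) return 0;
  return responses.filter(r => r.matched).length / responses.length;
}
```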
Improving information scent requires understanding the specific gap between user expectations and actual destinations. Generic improvements rarely work because scent problems are contextual.
Start by mapping user goals to navigation paths. What are users actually trying to accomplish? What words do they use to describe those goals? How do those words map to your current labels? A financial services company discovered that users searched for "pay bills" but their navigation said "transactions." The scent gap was semantic—same concept, different vocabulary. Changing the label to "Pay bills & transfers" increased usage 34%.
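One lightweight way to surface this kind of vocabulary gap is to check whether the words users actually use (drawn from in-product search, support tickets, or interviews) appear anywhere in your navigation labels. The sketch below assumes a simple tokenizer and illustrative data.

```typescript
// Sketch: flag user phrases whose vocabulary is absent from all nav labels.
function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

function vocabularyGaps(userPhrases: string[], navLabels: string[]): string[] {
  const labelVocab = new Set<string>();
  for (const label of navLabels) {
    for (const t of tokenize(label)) labelVocab.add(t);
  }
  // Phrases whose words never appear in any label are candidate scent gaps.
  return userPhrases.filter(p =>
    [...tokenize(p)].every(t => !labelVocab.has(t))
  );
}

// e.g. vocabularyGaps(["pay bills"], ["Transactions", "Statements"])
//   -> ["pay bills"]: the users' goal vocabulary is absent from the nav.
```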
Test labels with target users before implementation. Show them the label in context and ask what they expect to find. If expectations vary widely or don't match the actual destination, the scent is weak. This testing doesn't require elaborate studies: if five users reveal consistent expectation gaps, you have an actionable signal.
Strengthen scent through progressive disclosure. Instead of hiding everything behind generic labels, provide enough scent in the initial view that users can confidently predict whether to explore further. A navigation menu that shows "Reports" with no additional context creates weak scent. The same menu showing "Reports: Sales, Inventory, Customers" creates stronger scent because users can evaluate relevance without clicking.
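As a data structure, progressive disclosure can be as simple as letting each top-level item expose enough of its children to carry scent before anyone clicks. The shape and names below are illustrative.

```typescript
// Sketch: a navigation model whose top-level items preview their children.
interface NavItem {
  label: string;
  children?: NavItem[];
}

// "Reports" alone carries weak scent; surfacing child labels lets users
// judge relevance without committing to a click.
function scentPreview(item: NavItem, maxChildren = 3): string {
  if (!item.children?.length) return item.label;
  const names = item.children.slice(0, maxChildren).map(c => c.label);
  return `${item.label}: ${names.join(", ")}`;
}

const reports: NavItem = {
  label: "Reports",
  children: [{ label: "Sales" }, { label: "Inventory" }, { label: "Customers" }],
};
// scentPreview(reports) -> "Reports: Sales, Inventory, Customers"
```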
Use spatial relationships to reinforce scent. Place related items near each other. Create visual groupings that match user mental models. Position high-scent entry points in high-visibility locations. A healthcare app moved its most-used features from a hidden menu to the main screen, but usage didn't increase proportionally. The problem was spatial—users had learned to find these features in the menu, and the new location violated their expectations. Scent isn't just about labels—it's about predictable patterns.
Provide multiple scent trails to the same destination for different user segments. Expert users might follow strong scent from technical labels. Novice users might need more descriptive language. Rather than choosing one approach, create multiple entry points with different scent profiles. An analytics platform added a "Getting started" section with beginner-friendly labels while maintaining the existing technical navigation. Usage among new users increased 67% without disrupting expert workflows.
AI-powered research tools can diagnose scent problems at scale by detecting behavioral patterns that suggest scent confusion and immediately gathering context about user expectations. Traditional analytics show that users aren't clicking, but they don't explain why. AI-moderated research can intercept users at the moment of hesitation and ask targeted questions.
The timing matters enormously. Asking users about navigation expectations in a post-session survey produces generic, rationalized responses. Asking them in the moment—when they're actively evaluating whether to click—captures authentic expectations and decision-making processes.
This approach scales in ways traditional user testing cannot. You can't afford to have researchers watching every user interaction, but you can deploy AI that detects scent-related behaviors and systematically gathers data about expectations. Over hundreds or thousands of interactions, patterns emerge that reveal which labels create confusion, which visual treatments strengthen or weaken scent, and how different user segments interpret the same navigation elements differently.
Platforms like User Intuition apply this methodology by combining behavioral detection with adaptive questioning. When the system identifies a potential scent problem—extended hover time, repeated returns to the same navigation point, quick bounces—it can initiate a brief conversation to understand the user's mental model and expectations. The AI adapts its questions based on previous responses, probing deeper when answers reveal interesting gaps.
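In sketch form, the adaptive part might look like the following, where the follow-up question depends on the detected signal and on whether the previous answer already revealed an expectation gap. This illustrates the pattern, not User Intuition's actual question logic.

```typescript
// Sketch of adaptive questioning: pick the next question from the detected
// behavioral signal and the state of the conversation so far.
type Signal = "hover-no-click" | "quick-bounce" | "repeated-return";

function nextQuestion(signal: Signal, priorAnswerShowedGap: boolean): string {
  // Probe deeper once an expectation gap has surfaced.
  if (priorAnswerShowedGap) {
    return "What would you have expected to find instead?";
  }
  switch (signal) {
    case "hover-no-click":
      return "What would you expect to find if you clicked here?";
    case "quick-bounce":
      return "Was this page what you were looking for?";
    case "repeated-return":
      return "What are you trying to find right now?";
  }
}
```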
The resulting data creates a detailed map of scent strength across your product. You can see exactly which labels create confusion, what users expect versus what they find, and how expectations vary across user segments. This diagnostic precision makes scent optimization systematic rather than guesswork.
As products grow, maintaining strong information scent becomes exponentially harder. A simple product with 10 features can create clear scent for each. A complex product with 100 features faces impossible tradeoffs—there isn't enough navigation real estate to make everything obvious.
This is where scent strategy matters more than scent tactics. You can't create strong scent for everything, so you need to prioritize based on user goals and frequency. High-value, high-frequency paths deserve the strongest scent. Lower-frequency features can tolerate weaker scent if they're discoverable through search or contextual placement.
Layered navigation helps maintain scent in complex architectures. The top layer provides strong scent for major categories. Secondary layers provide progressively more specific scent. Users can navigate confidently because each layer reduces ambiguity. A project management platform organized features into "Plan," "Execute," "Review" at the top level. Each category expanded to show specific features with strong scent: "Plan" revealed "Timeline," "Resources," "Budget." Users could navigate three levels deep while maintaining confidence about where they were going.
Contextual scent adapts based on user state and history. If a user frequently accesses certain features, strengthen scent for those features in their interface. If they've never used a category of features, provide additional scent information to help them evaluate relevance. Static navigation creates the same scent for everyone. Adaptive navigation creates stronger scent for each person's actual needs.
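A minimal sketch of this kind of adaptation, under assumed data shapes: order navigation items by each user's own usage history, and flag never-visited items for extra scent rather than burying them.

```typescript
// Sketch: adapt scent to the individual from their usage history.
interface UsageHistory {
  visits: Map<string, number>;  // nav label -> visit count
}

// Order nav items so each user's most-used paths carry the strongest scent.
function rankByRelevance(labels: string[], history: UsageHistory): string[] {
  return [...labels].sort(
    (a, b) => (history.visits.get(b) ?? 0) - (history.visits.get(a) ?? 0)
  );
}

// Never-visited categories get extra scent (e.g. a one-line description)
// rather than a buried position, so discovery stays possible.
function needsExtraScent(label: string, history: UsageHistory): boolean {
  return (history.visits.get(label) ?? 0) === 0;
}
```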
Sometimes improving scent exposes issues you didn't know existed. When you make it easier for users to find something, and they still don't use it, the problem isn't navigation—it's the thing itself.
A SaaS company strengthened scent for their analytics features based on user research showing that the original labels were confusing. Clicks increased 45%, but actual usage of the analytics remained flat. Users were clicking, looking, and leaving. The strong scent successfully communicated what was there, which allowed users to quickly determine it wasn't what they needed. The problem wasn't discoverability—it was that the analytics didn't solve user problems.
This diagnostic value is important. Weak scent masks other problems. Strong scent reveals them. If you improve scent and usage doesn't increase, you've learned something valuable: the feature itself needs work, not just its entry point.
The most effective approach to information scent is building it into design decisions from the start rather than treating it as a post-launch optimization problem.
During feature design, explicitly map user goals to navigation paths. What will users be trying to accomplish? What words will they use? Where will they expect to find this functionality? These questions should inform naming and placement decisions before implementation.
Test scent early with low-fidelity prototypes. Show users wireframes or sketches with proposed labels and ask what they expect. This catches scent problems before they're built into production. Early-stage scent testing requires minimal resources but prevents expensive post-launch navigation redesigns.
Establish scent guidelines that create consistency across features. Define your semantic framework—the vocabulary you'll use for common concepts. Document spatial patterns—where certain types of features typically appear. Create visual patterns that reinforce meaning. These guidelines help new features inherit strong scent from existing patterns rather than each feature requiring custom scent optimization.
Continuously measure scent strength for key paths. Don't wait for obvious problems. Track expectation-reality match, immediate bounce rates, and hesitation patterns. When metrics suggest weakening scent, investigate before it significantly impacts usage. Small scent degradations are easier to fix than major navigation problems.
Strong information scent creates value beyond immediate navigation success. Users who can confidently predict where paths lead become more exploratory. They try new features because the cost of exploration is low—if the scent is wrong, they've wasted only seconds. This exploration drives feature discovery and adoption.
Strong scent also reduces support burden. Many support tickets stem from users not finding features that exist. When scent clearly communicates what's where, users self-serve more effectively. A B2B software company improved scent across their main navigation and saw support tickets related to "where is X feature" drop 58% over three months.
Perhaps most importantly, strong scent builds user confidence in the product. When navigation consistently delivers what it promises, users trust the interface. This trust translates to higher engagement, longer retention, and stronger advocacy. Users don't think consciously about information scent, but they feel its effects every time they interact with your product.
The product team that saw 4% adoption of their new feature eventually ran in-product research to understand the gap. Users weren't avoiding the feature because they didn't need it—they were avoiding it because the navigation label "Intelligence Hub" created weak scent. Users couldn't predict what "Intelligence" meant in this context. After changing the label to "Customer Insights Dashboard" and adding a brief description, adoption increased to 34% within two months. The feature hadn't changed. The scent had.
When users don't click, it's rarely laziness or lack of interest. It's weak information scent. They can't confidently predict where the path leads, so they don't take the first step. Strengthening scent isn't about making things prettier or more prominent—it's about making outcomes predictable. Users will follow trails they can smell.