Practical statement of work templates for agencies adding AI-powered research capabilities to client engagements.

The hardest part of selling new research capabilities isn't explaining the technology. It's translating that capability into contract language that protects both parties while enabling the work to proceed.
When agencies add voice AI research to their service portfolio, they face a documentation challenge: existing SOW templates assume human moderators, traditional timelines, and familiar deliverables. The operational reality of AI-moderated research—faster cycles, different quality controls, new participant consent requirements—doesn't map cleanly to standard agency contracts.
Our analysis of 47 agency implementations reveals that these documentation gaps create three recurring problems. First, scope creep emerges when clients expect unlimited revisions to AI interview protocols without understanding the technical constraints. Second, liability questions surface around data handling and AI decision-making when contractual boundaries are unclear. Third, pricing disputes arise when traditional per-interview economics clash with AI's fundamentally different cost structure.
The solution isn't creating entirely new contract frameworks. It's adapting proven SOW patterns to address the specific characteristics of AI-moderated research while maintaining the legal and operational protections agencies need.
Traditional qualitative research SOWs specify deliverables like