Insights & Guides · 13 min read

How to Track Brand Health with User Intuition (Step-by-Step)

Brand health tracking on User Intuition is an eight-step workflow: Create Study, Customize Plan, Review Goals, Setup Interview, Test Conversation, Launch, Invite Participants, and Review Insights. The product is built so a brand manager, an insights lead, or a CMO can launch a quarterly tracking wave in an afternoon, then watch consumer transcripts, satisfaction-rated calls, and synthesized reports show up inside the Intelligence Hub within 48-72 hours. Pricing sits at roughly $20/interview on the Pro plan, and the 4M+ research panel covers 50+ languages, which means a continuous quarterly program is financially viable even at mid-market consumer brand scale. This tutorial walks through each of the eight steps with the verbatim labels you will see in the product and the choices that matter most for brand health specifically.

For the broader methodology, see the methodology guide. For headline pricing, proof points, and the comparison against Kantar, Ipsos, and YouGov BrandIndex, see the User Intuition brand health tracking page. This piece stays in the workflow lane: what to click and why each setting matters when the question is whether perception is shifting in your favor.

What is brand health tracking, briefly?


Brand health tracking is the longitudinal measurement of how consumers perceive your brand across awareness, consideration, preference, trust, purchase intent, brand associations, competitive positioning, and equity drivers. The discipline sits between two failed substitutes: ad-hoc consumer surveys, which capture a single moment in time and rarely repeat, and annual syndicated trackers, which take 6 to 8 weeks per wave and cost $25K to $75K per year. Neither produces the trend data brand teams actually need.

Done well, brand health tracking answers four questions every quarter. First, has perception shifted since the last wave, and which associations moved? Second, what is driving that shift: the campaign we ran, a competitor’s launch, or category-level pressure? Third, where is our brand vulnerable in the consideration set? Fourth, what message lands with the segment we want to win, in their own words?

The mechanism that makes brand tracking work is repeatability layered with depth. Identical methodology across waves makes results comparable. A 30-minute conversation with a category buyer, with 5 to 7 levels of structured probing on the why behind brand preference, surfaces the belief structures a fixed-attribute survey will never reach. Trust in ingredient quality, perceived innovation pace, channel-specific brand impressions, the emotional resonance of a tagline: these are the patterns underneath the awareness number. They show up only in conversation.

Modern AI-moderated platforms compress the timeline from weeks to 48-72 hours and the cost from $50K+ per wave to $200 floor pricing. Instead of one or two trackers per year, brand teams can run quarterly qualitative waves of 30 to 50 interviews, feed every transcript into a searchable Intelligence Hub, and watch perception trends compound wave by wave. The workflow below is built around that cadence.

Why doing this on User Intuition is different

Most brand tracking tooling makes you choose between depth and speed. Quantitative trackers are scaled and shallow. Qualitative consultancies are deep and slow. User Intuition is built to deliver both.

The AI moderator runs 30-minute structured laddering conversations with each consumer and probes 5 to 7 levels deep on brand associations, competitive comparisons, and the moments that shape preference. Every conversation runs the same methodology, so a wave with 30 interviews and a wave with 300 are directly comparable across quarters. There is no moderator drift, because there is no human moderator in the room. The 98% participant satisfaction rate reflects how the experience lands.

Recruitment is built in. The 4M+ research panel covers 50+ languages and lets you target by demographics, geography, category buying behavior, and segment without building a list. Country and language selection happen at the Invite Participants step, so a multi-market brand tracker runs from one study definition with results indexed in a single Intelligence Hub session. Customer-side recruitment is also a button: import your loyalty list from Salesforce or HubSpot when you want to layer existing-customer perception against panel-recruited prospects.

Synthesis runs in the same product. Once interviews complete, the Reports tab generates an Executive Summary, Top Insights, Detailed Analysis, and Recommendations, all grounded in verbatim consumer quotes. The Intelligence Hub layers across waves, so a Q1 wave and a Q4 wave surface compounding shifts that any single-wave report would miss. See the brand health tracking solution page for the full positioning.

The 8-step walkthrough


The product flow is the same for every study type. The choices below are tuned for brand health tracking specifically.

Step 1: Create Study

Open the New Study screen and you will see six pre-built templates: Win/Loss Analysis, Churn Analysis, NPS and CSAT, Customer Onboarding, Brand Health, and Custom Design. Pick the Brand Health card. The selected card highlights with a dark background. Click Save & Continue.

The Brand Health template is purpose-built for “What do people think about us?” research. It pre-loads a conversation flow covering unaided and aided awareness, brand associations, competitive consideration, perceived differentiation, and the emotional drivers behind preference. The template handles quarterly trackers, pre/post-campaign measurement, repositioning research, and competitive threat response.

Brand-health-specific tip: the same template handles category buyers, current customers, and competitor loyalists in one study. Mixing audiences inside a single wave is where the highest-leverage intelligence lives, because the contrast between how your buyers describe you and how a competitor’s loyalists describe you is the equity driver map. See Choose Your Study Type for the full template comparison.

Step 2: Customize Plan

The Customize Plan screen opens a chat with Charles, our AI researcher. Type your context into the chat box and Charles drafts the full research plan: Objective, Background, North-Star Learning Goal, Key Sub-Questions, Conversation Flow, and Interviewer Guidelines, all in a panel on the right.

For brand health, give Charles four things in your opening message. First, the brand: “We are tracking {brand name}, a {category} brand positioned around {core proposition}.” Second, the category context: “Our category is {category}, dominated by {Competitor A} and {Competitor B}.” Third, the audience: “We measure perception among category buyers in the past 90 days, with sub-cohorts of current customers and competitor loyalists.” Fourth, the competitor set you want direct comparison against. Charles will return a research plan that asks about awareness, associations, consideration drivers, and the why behind every shift.

Brand-health-specific tip: tell Charles which associations you already think you own and which you suspect are eroding. He uses your hypotheses to design probes that test or break them, which keeps the interviews from drifting into ground you have already covered. Read the Customize Plan docs for the full set of context, scope, and objective questions Charles will ask.

Step 3: Review Goals

The Review Goals screen opens the research plan in a rich text editor. Study Name sits at the top (Charles suggests one, like “Q2 Brand Health, {Brand} vs Category”; rename it to fit your internal naming convention so quarterly waves stack cleanly in the Intelligence Hub). Below that, every section of the plan, Objective through Interviewer Guidelines, is editable. Highlight, retype, add, delete. Changes save automatically.

This is the step where most brand health programs go off the rails. Default plans cover awareness, consideration, and preference, the same metrics every quantitative tracker reports. The trick is to add explicit coverage of the why. Under Key Sub-Questions, add probes like “What comes to mind when you think about {brand}, before any prompts?”, “How would you describe {brand} to a friend?”, and “What would need to change about {brand} for you to choose it over {top competitor}?” Most brand trackers fail because they only track the score, not the driver.

Brand-health-specific tip: keep Key Sub-Questions to 4 to 6 themes with 2 to 4 questions each. Longer plans mean longer interviews mean lower completion rates. The Review Study docs have the full editing toolbar reference. Click Save & Continue when the plan reads the way you want.

Step 4: Setup Interview

The Setup Interview screen handles the participant-facing experience. Two voice cards appear: Elliot (Male, American) and Clara (Female, American). Click the play button on each card to preview the voice, then click the card whose tone fits your buyer audience to select it. Below the voice cards, the Default Mode toggles control format: Chat, Audio, or Video. Audio is the recommended default for brand health because voice transcripts capture the tone, hesitation, and emotional resonance behind brand associations that chat compresses out.

Below the format toggles is the screener questions section. Add the highest-leverage brand health screener: “In the past 90 days, have you purchased a product in the {category}?” Add a second screener for the cohort split: “Are you currently a customer of {brand}?” with branches for Yes (current customer), No, but I have considered it (consideration set), and No, never considered (non-buyer). The cohort tag flows through to the Reports tab so association language can be filtered by audience.
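The cohort split the second screener creates can be sketched as simple branching logic. This is an illustrative sketch only; the function name, answer values, and cohort tags here are hypothetical, not User Intuition’s actual API or export format.

```python
# Hypothetical cohort-tagging logic mirroring the two screeners above.
# Names and answer values are illustrative, not User Intuition's API.

def assign_cohort(category_buyer: bool, customer_status: str) -> str:
    """Map screener answers to the cohort tag used to filter reports."""
    if not category_buyer:
        return "screened_out"       # no category purchase in the past 90 days
    if customer_status == "yes":
        return "current_customer"
    if customer_status == "considered":
        return "consideration_set"  # "No, but I have considered it"
    return "non_buyer"              # never considered the brand

print(assign_cohort(True, "considered"))  # consideration_set
```

The point of the branch structure is that every participant carries exactly one cohort tag into the Reports tab, so association language can be compared across audiences later.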

Brand-health-specific tip: pick Elliot for category audiences that skew male and Clara for audiences that skew female. The match between moderator voice and participant demographic measurably improves rapport and depth. Read the Setup Interviewer docs for the full mode-selection guide.

Step 5: Test Conversation

The Test Conversation screen prompts you with “Ready to test your study?” Click Start Test Conversation and you experience the full participant flow: greeting, study introduction, the actual research questions, AI probing on your answers, and a wrap-up. Test conversations do not count against your usage limits, so you can test as many times as you need. After the test, a feedback screen asks “How was the conversation?” with Good or Needs Work options.

For brand health, run a test conversation to verify two things. First, the moderator pronounces the brand name correctly throughout, especially for brands with non-English origins or unusual spellings. Mispronunciation breaks rapport instantly. Second, the AI probes brand associations naturally rather than reading from a list. Listen for “What else comes to mind?” not “On a scale of 1 to 10…”

Brand-health-specific tip: if the AI accepts a generic association like “good quality” without probing, go back to Step 3 and tighten the Interviewer Guidelines with “Probe at least three levels deep on every brand association. If the participant says ‘good quality,’ ask what specifically they mean, then ask what evidence supports the belief.” Then re-test. The Test Conversation docs cover what to look for during testing.

Step 6: Launch

The Launch screen shows three summary cards (Study Type, Interviewer, Interview Type), the Study Name, and the full Research Plan Preview. Final review checklist sits in three accordions: Research Plan, Interviewer Settings, Study Configuration. When everything looks right, click the Save and Launch button (the one with the rocket icon). Your study goes live immediately. You are redirected to the Study Dashboard.

The detail brand teams miss: launching does not automatically send invitations. The study is live and ready to receive participants, but recruitment happens in the next step. This is intentional, because most teams want to set up the study config first, review it with the team, and then trigger recruitment on a specific day.

Brand-health-specific tip: launch on a consistent quarterly cadence and a consistent week-of-quarter, with mid-month launches preferred. Mid-month days see the highest completion rates among working consumers, and consistent timing protects your trend data from seasonality drift. Mid-February, mid-May, mid-August, and mid-November is the safe pattern. The Launch docs cover everything that can and cannot be changed post-launch.
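The cadence above can be pinned down with a tiny date helper. A minimal sketch, assuming the 15th as the mid-month launch day; the function name is illustrative, not part of the product.

```python
from datetime import date

# Mid-month quarterly cadence from the tip above: one wave per quarter,
# launched mid-February, mid-May, mid-August, and mid-November.
# The 15th is an assumed stand-in for "mid-month".
LAUNCH_MONTHS = (2, 5, 8, 11)

def launch_schedule(year: int) -> list[date]:
    """Return the four mid-month launch dates for a tracking year."""
    return [date(year, month, 15) for month in LAUNCH_MONTHS]

for wave, launch_day in enumerate(launch_schedule(2025), start=1):
    print(f"Q{wave} wave: {launch_day.isoformat()}")
```

Putting the four dates on the calendar at the start of the year is what keeps wave-over-wave comparisons free of seasonality drift.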

Step 7: Invite Participants

This step opens after launch. From the Study Dashboard, the Invites tab plus the Actions dropdown gives you four invitation methods that map directly to brand health recruitment patterns.

Research Panel is the default for brand health. Filter by category buying behavior in the past 90 days, set demographics, set country (NA, LATAM, and EU panels are available), select a language (50+ supported), and send. For multi-market tracking, the same study can run parallel English, Spanish, French, and German cohorts indexed in one Intelligence Hub.

Invite Your Customers imports current customers manually, via CSV, or directly from Salesforce or HubSpot. Use this to layer customer perception against panel-recruited prospects in the same wave.

Share Link generates a unique URL for loyalty emails, packaging QR codes, or Zapier flows. Embed Widget drops always-on perception capture into your customer portal or post-purchase email for continuous brand listening.

Brand-health-specific tip: combine methods. Panel recruitment for the prospect/competitor-loyalist cohorts, customer import for the existing-customer cohort. The contrast between the two is where competitive vulnerability and equity strength become visible. The Recruiting overview covers all four methods side by side.

Step 8: Review Insights

Once interviews complete (the first ones inside hours, full waves within 48-72 hours), three tabs hold the output. The Calls tab lists every interview with email, status, date, duration, quality rating (High, Medium, Poor), and end reason. Click any row to expand the full transcript with audio playback. The Reports tab holds the AI-synthesized analysis: Executive Summary, Total Participants, Top Insights at a Glance, Detailed Analysis with verbatim quotes, and Recommendations. Click Generate Report once you have at least 2 completed interviews; click Regenerate Report as more land.

The third surface is the Intelligence Hub, accessed from the left navigation. Each Hub session is a workspace where you sync waves, upload supplementary files (PDFs, CSVs, syndicated tracker exports), and ask natural-language questions across everything in the room. For a quarterly brand health program, this is the compounding layer.

Brand-health-specific tip: create a single Intelligence Hub session called “Brand Health” at the start of the year and sync every quarterly wave into it. Then ask questions like “How has consumer language about our quality evolved across the last four quarters?” or “Which associations did our spring campaign actually shift?” Pair Hub queries with the cohort filter to see whether perception is improving with new buyers or stable with existing ones: the single most actionable view, and one a single-wave report cannot replicate. The Intelligence Hub docs and the Reports docs cover the full output surface.

Mini case study


Turning Point Brands ran the workflow above on a cohort of category consumers during a brand campaign rollout. The team set up the study in a single afternoon: Brand Health template selected at Step 1, brand and category context briefed to Charles at Step 2, learning goals tightened at Step 3 to include explicit probes on which associations the new campaign was supposed to shift, Audio mode and category buyer screeners configured at Step 4, a test conversation run at Step 5, study launched mid-quarter at Step 6.

Recruitment used the Research Panel filtered by category buyer status and target demographics. Interviews completed within 72 hours.

The Reports tab and the Intelligence Hub surfaced a finding that contradicted the assumed campaign result. Awareness had moved as expected, but the specific brand association the campaign was designed to build had not shifted at all. Consumers noticed the campaign but did not update their core beliefs about the brand. The team adjusted messaging mid-campaign rather than waiting for the post-campaign tracker to confirm the miss months later. Purchase intent improved by 23% on the revised creative.

“User Intuition helped us understand that our campaign moved awareness but didn’t shift brand perception. We adjusted messaging mid-campaign and saw a 23% improvement in intent. This would’ve been invisible in traditional trackers.”

Eric O., Chief Commercial Officer, Turning Point Brands

What gets the best results from brand health tracking on User Intuition?


Tips for getting the most out of the workflow, gathered from the patterns that consistently produce trend clarity and actionable shifts.

Protect the tracking question set. Aim for 4 to 6 main themes with 2 to 4 sub-questions each at Step 3. The six to ten questions that measure your core brand health metrics should appear in every wave, word-for-word, in the same sequence. Method drift is the single most common reason brand tracking programs produce misleading data.

Run a test conversation every wave. Even on a study you have run before, a test catches tone misalignment, pronunciation issues, or shallow probing while it is still cheap to fix.

Pick voice over chat for brand health. Voice transcripts surface tone, hesitation, and emotional resonance around brand associations that chat compresses out. Reserve chat for international cohorts where the participant’s language fluency makes typing a better experience.

Mix cohorts in the same wave. Putting current customers, consideration-set prospects, and competitor loyalists in one study produces the contrast that makes equity drivers visible. A 50-interview wave split across the three cohorts outperforms a 50-interview wave from a single audience for almost every brand-team use case.

Run quarterly, not annual. 30 to 50 interviews per quarter at $20/interview on the Pro plan compounds into a much sharper picture than one 200-interview annual tracker. The Intelligence Hub gets meaningfully more useful every quarter you feed it, and quarterly cadence catches gradual erosion before it shows up in revenue.
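The budget case above is simple arithmetic, worth making explicit. A back-of-envelope sketch using the figures stated in this article ($20/interview, 50 interviews per wave, four waves per year):

```python
# Cost math from the figures above: $20/interview on the Pro plan,
# 50 interviews per quarterly wave, four waves per year.
COST_PER_INTERVIEW = 20
INTERVIEWS_PER_WAVE = 50
WAVES_PER_YEAR = 4

annual_cost = COST_PER_INTERVIEW * INTERVIEWS_PER_WAVE * WAVES_PER_YEAR
print(f"Quarterly program: ${annual_cost:,}/year")  # $4,000/year
```

That $4,000/year sits well under the $25K to $75K range quoted earlier for traditional annual trackers, which is what makes the quarterly cadence financially viable.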

Sync the Intelligence Hub once per quarter. Cross-wave pattern recognition is the highest-leverage output User Intuition produces, and it only works if every completed wave lands in the same Brand Health session. See the cost breakdown for quarterly programs and the comparison of brand health tracking platforms for the budgeting case.

Route findings inside 7 days. Brand intelligence degrades fast. As soon as the report and the Intelligence Hub are populated, route findings to specific owners (brand manager, creative agency, product marketing) with a defined SLA. The same compounding logic applies to brand health tracking in CPG, where shelf dynamics and seasonal effects make timely findings even more valuable. The waves that move brand metrics are the ones whose findings reach a decision-maker the same week.


Note from the User Intuition Team

Your research informs million-dollar decisions — we built User Intuition so you never have to choose between rigor and affordability. We price at $20/interview not because the research is worth less, but because we want to enable you to run studies continuously, not once a year. Ongoing research compounds into a competitive moat that episodic studies can never build.

Don't take our word for it — see an actual study output before you spend a dollar. No other platform in this industry lets you evaluate the work before you buy it. Already convinced? Sign up and try today with 3 free interviews.

Frequently Asked Questions

How long does setup take?

Setup takes 10 to 20 minutes from Create Study to Launch if you have your brand, your competitor set, and a few learning goals in mind. Charles, the AI researcher inside User Intuition, generates the objective, background, and learning goals from a short brief. Most teams spend more time deciding their tracking question set than configuring the study.

Do I need a dedicated researcher?

No. A brand manager, an insights lead, or a CMO can run a full quarterly wave without a dedicated researcher. The AI does the moderation, the synthesis, and the report. Your job is to brief Charles on the brand and the category, edit the learning goals, and review the Intelligence Hub once interviews complete.

How often should I run waves?

Quarterly is the default cadence. It is frequent enough to catch gradual perception erosion, fast enough to measure campaign effects within the same fiscal year, and lean enough that the team can absorb findings between waves. Brands in fast-moving categories or post-crisis periods can run monthly waves at $20/interview without breaking the budget.

Should I run voice or chat interviews?

Voice for almost every brand health study. Voice transcripts capture the tone, hesitation, and emotional resonance behind brand associations that chat compresses out. Brand perception is partly about feeling, and feeling shows up in voice. Audio is the recommended default in User Intuition’s setup screen for brand health work.

How many interviews per wave do I need?

Thirty to fifty depth interviews per wave is the standard for qualitative brand tracking. That sample is enough to identify consistent themes, map association structures across segments, and surface the equity drivers behind preference. The 4M+ research panel makes 30 to 50 interviews per quarter operationally trivial at $20/interview on the Pro plan.

How do screener questions work?

Screener questions sit inside the Setup Interview step and run before the main interview begins. For brand health, the highest-leverage screener is category buyer status. Add a question like “In the past 90 days, have you purchased a product in the {category}?” so participants without category context drop out before consuming a credit.

Can I track multiple markets and languages?

Yes. User Intuition supports 50+ languages, and country and language selection happens at the Invite Participants step when you build a research panel. For a multi-region brand tracking program, you can run parallel English, Spanish, French, German, and Japanese cohorts inside a single study and compare verbatim associations side by side in the Intelligence Hub.

What does it cost?

On the Professional plan, audio interviews run at $20/interview, the marketing headline rate. Studies start at $200, with no monthly minimums and no contract. A quarterly program of 50 interviews per wave runs roughly $4,000 per year, well below the $25K to $75K range for traditional annual brand trackers.

How does this compare to traditional brand trackers?

Traditional brand trackers take 6 to 8 weeks per wave at $15K to $50K and report awareness, consideration, and preference scores without the why behind them. The User Intuition workflow lands consumer interviews and a synthesized report in 48-72 hours per wave. Charles handles methodology, the panel handles recruitment, and the Intelligence Hub aggregates waves over time so you watch perception compound rather than starting from zero each quarter.

Where do results live?

Three places inside User Intuition. The Calls tab holds every transcript and audio recording with quality ratings. The Reports tab holds the AI-synthesized Executive Summary, Top Insights, Detailed Analysis, and Recommendations. The Intelligence Hub aggregates waves over time so you can ask cross-wave questions like “How have consumers described our quality versus the private label across the last four quarters?”
Get Started

Put This Framework Into Practice

Sign up free and run your first 3 AI-moderated customer interviews — no credit card, no sales call.

Self-serve

3 interviews free. No credit card required.

See it First

Explore a real study output — no sales call needed.

No contract · No retainers · Results in 72 hours