
The Lie We Tell Ourselves About Customer Research
Most research asks what people say. The problem is people don't do what they say. This piece breaks down the gap between stated and revealed preference — and why behavioral modeling, not better interviews, is how you close it.
For quite a while now we’ve been running a polite fiction in product development.
We ask customers what they think.
They tell us.
We build accordingly.
Then we act surprised when they don’t buy.
This isn’t a failure of effort. It’s a failure of theory.
Most human decision-making is not conscious, linear, or even particularly coherent. It’s fast, emotional, identity-driven, and post-rationalized. People decide first. Then they explain.
Which means if your research model relies primarily on asking people to explain themselves — whether through surveys or 90-minute in-depth interviews — you’re building strategy on narrative, not behavior.
And markets don’t reward narratives. They reward behavior.
Organic In-Depth Interviews: Powerful, but Not Predictive
Let’s be clear: Organic (i.e., real, in-person) in-depth interviews are serious tools.
They surface nuance. They reveal emotional language. They expose tension and contradiction. Remove qualitative research from product teams and decision quality collapses.
But interviews still operate at the level of conscious storytelling.
When someone says:
“I switched because of price.”
“I didn’t trust the brand.”
“The features weren’t compelling.”
They’re not lying. They’re reconstructing.
Underneath those statements are forces they can’t directly access:
Status preservation
Loss aversion
Effort avoidance
Identity alignment
Habit inertia
Fear of regret
You can probe. You can ladder. You can sit with silence.
But you cannot fully interview the subconscious.
Interviews reveal how users frame their world.
They don’t fully reveal how they navigate it.
And navigation is what determines revenue.
The Delta Problem
Here’s the real issue:
There is always a delta between what people say and what they do.
Call it:
The intention–action gap
The narrative–behavior gap
Or simply, human nature
In research terms, it’s the difference between stated preference and revealed preference.
And that delta is expensive.
It’s why:
“Strong purchase intent” doesn’t convert.
“Users love it” doesn’t scale.
“We tested messaging” doesn’t move revenue.
Most companies accept this gap as inevitable.
They shouldn’t.
What If We Could Minimize the Delta?
What if we had a system that didn’t stop at narrative?
What if interviews generated hypotheses — and then those hypotheses were stress-tested against modeled decision environments?
Imagine this sequence:
Conduct in-depth interviews.
Extract stated motivations, tensions, and language.
Translate those into behavioral variables — risk sensitivity, status orientation, price elasticity, cognitive load tolerance.
Simulate decisions under trade-offs.
Identify where narrative collapses under pressure.
That process doesn’t eliminate the delta.
But it shrinks it.
Instead of taking “I would pay for this” at face value, you test:
What happens when a cheaper alternative appears?
What happens when switching requires effort?
What happens when identity alignment is weak?
You’re not asking people to predict themselves.
You’re modeling the environment that forces their hand.
That’s how you reduce the gap between intention and action.
Not by interviewing harder — but by pressure-testing behavior.
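To make the idea concrete, here is a toy sketch of that pressure-test in code. Every name, weight, and threshold below is a hypothetical illustration invented for this example — not a description of any production system — but it shows the shape of the method: encode stated motivations as behavioral variables, then vary the environment and watch whether the stated preference survives.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Behavioral variables extracted from interviews (all in [0, 1]).
    risk_sensitivity: float
    price_elasticity: float
    effort_tolerance: float
    identity_alignment: float

@dataclass
class Scenario:
    # The decision environment the profile is placed in.
    price: float             # our price
    rival_price: float       # cheapest credible alternative
    switching_effort: float  # friction of switching (0 = frictionless)

def adopts(p: Profile, s: Scenario) -> bool:
    """Toy utility: identity pull minus price, effort, and risk penalties."""
    price_penalty = p.price_elasticity * max(0.0, (s.price - s.rival_price) / s.rival_price)
    effort_penalty = (1.0 - p.effort_tolerance) * s.switching_effort
    utility = p.identity_alignment - price_penalty - effort_penalty - 0.2 * p.risk_sensitivity
    return utility > 0.0

# A user who said "I would pay for this" in the interview.
stated_fan = Profile(risk_sensitivity=0.5, price_elasticity=0.8,
                     effort_tolerance=0.4, identity_alignment=0.6)

# The narrative survives a calm, frictionless environment...
print(adopts(stated_fan, Scenario(price=10, rival_price=10, switching_effort=0.1)))
# ...and collapses once a cheaper rival appears and switching takes effort.
print(adopts(stated_fan, Scenario(price=10, rival_price=7, switching_effort=0.6)))
```

The point is not the specific utility function — it is that the same stated preference produces different decisions in different environments, which is exactly the delta interviews alone cannot see.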
The Case for Behavioral Modeling
If most decisions are subconscious, then prediction requires modeling behavior under constraint — not collecting articulated opinions.
Behavioral modeling asks:
What happens when this decision competes with status, risk, effort, and identity?
Because decisions aren’t made in calm rooms with moderators.
They’re made:
Under time pressure
In noisy environments
With incomplete information
In social contexts
Interviews flatten this environment.
Behavioral systems reintroduce it.
If you want to forecast adoption, churn, pricing sensitivity, or resistance, you need to model trade-offs — not just document testimonials.
Speed Is Strategy
There’s another structural issue: cadence.
Traditional research is episodic:
Recruit → Interview → Synthesize → Present → Decide.
Modern product development is continuous.
Features ship weekly. Positioning evolves monthly. Pricing experiments happen quarterly. Research cycles that lag product cycles create blind spots.
Rapid simulation compresses learning loops.
You can:
Stress-test positioning across psychological profiles
Model pricing sensitivity under different framing
Explore how identity alignment affects conversion
Surface failure points before engineering commits
Speed compounds.
The firms that learn faster make better bets.
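The cadence argument can be seen in miniature: once positioning and profiles are expressed as variables, an entire framing-by-profile grid runs in milliseconds. The profiles, framings, and sensitivity numbers below are invented for illustration only.

```python
from itertools import product

# Hypothetical psychological profiles; the numbers are illustrative
# assumptions, not measured data.
profiles = {
    "security_driven": {"loss_frame_boost": 0.30, "base_intent": 0.40},
    "status_driven":   {"loss_frame_boost": 0.05, "base_intent": 0.55},
    "novelty_seeking": {"loss_frame_boost": -0.10, "base_intent": 0.60},
}

# Two ways to frame the same price: as a gain or as an avoided loss.
framings = {
    "gain": 0.0,  # e.g. "unlock faster workflows"
    "loss": 1.0,  # e.g. "stop losing hours every week"
}

# Cross every profile with every framing in a single pass — the whole
# grid is cheap, which is what makes weekly learning loops possible.
for (name, p), (frame, weight) in product(profiles.items(), framings.items()):
    intent = p["base_intent"] + weight * p["loss_frame_boost"]
    print(f"{name:16s} {frame}-framed: modeled intent {intent:.2f}")
```

Note how loss framing helps the security-driven profile and actively hurts the novelty-seeking one — an interaction that an averaged result would hide entirely.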
Why Averages Mislead
Surveys — and even some qualitative synthesis — optimize for averages.
But markets don’t behave like averages. They behave like distributions.
Your most profitable users are often extreme.
Your biggest churn risks are rarely typical.
Behavioral modeling allows you to simulate variance:
Risk-averse vs novelty-seeking
Security-driven vs status-driven
Identity-aligned vs identity-threatened
Strategy lives in the tails.
Averages are comfortable.
Variance is profitable.
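A tiny simulation shows why averages mislead. Assume (purely for illustration) a bimodal market: many price-sensitive users plus a small tail of enthusiasts. The "average user" never buys, yet roughly a tenth of the real population does.

```python
import random

random.seed(0)

PRICE = 30.0

def converts(willingness_to_pay: float) -> bool:
    # A conversion decision is a threshold, i.e. nonlinear — which is
    # exactly why evaluating it at the mean gives the wrong answer.
    return willingness_to_pay >= PRICE

# Hypothetical bimodal market: 900 price-sensitive users, 100 enthusiasts.
population = ([random.gauss(20, 3) for _ in range(900)] +
              [random.gauss(60, 8) for _ in range(100)])

avg_wtp = sum(population) / len(population)
conversion_of_average_user = converts(avg_wtp)
actual_conversion = sum(converts(w) for w in population) / len(population)

print(f"average WTP: {avg_wtp:.1f}")             # ~24: "the average user" never buys
print(f"actual conversion: {actual_conversion:.1%}")  # ~10%: the tail buys
```

Model the average and you kill the product; model the distribution and you find the tail that funds it.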
Continuous Experimentation Is the Endgame
The future isn’t fewer interviews.
It’s interviews plus behavioral modeling plus live experimentation.
Interviews generate insight.
Simulation stress-tests it.
Experimentation validates it.
Three layers:
What users say.
How modeled users behave under constraint.
What real users actually do.
The goal is simple:
Minimize the delta between narrative and action.
Because that delta is where capital goes to die.
If you believe humans are rational, self-aware, and consistent, then asking them what they think is sufficient.
If you believe humans are emotional, identity-driven, and frequently unaware of their own drivers, then you need systems that model behavior — not just collect stories.
The companies that reduce the say–do gap will outlearn and outbuild the rest.
The others will keep asking customers why they didn’t buy — and mistaking the explanation for the cause.