Two ways to run research with Synthetic Users and why the difference matters

IRIS, what difference does using agents to guide research actually make?

We recently ran the same research topic through two Synthetic Users workflows: 

1. A regular research study

2. An identical research study with IRIS enabled.


We applied both workflows to an exploration of the experience of medical workers on the night shift (a cohort that is notoriously tricky to recruit with traditional research methods), looking at how nights unfold, how energy and mood change, and how workers use short breaks and downtime. The audience and goal were identical in both cases. What changed was how the research was guided.

Both used AI participants.
Both produced strong insights.
But the outputs felt very different.

The difference wasn’t automation. It was guidance.

Some of this difference showed up even in how the research was set up, including how synthetic users were configured and how much direction was applied during the process.
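The setup can be pictured as a single study definition with one toggle. Here is a minimal sketch in Python — `StudyConfig` and `iris_enabled` are hypothetical names chosen for illustration, not the platform's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StudyConfig:
    # Hypothetical stand-in for a Synthetic Users study setup;
    # the field names are illustrative, not the real platform API.
    audience: str
    goal: str
    iris_enabled: bool = False  # the only knob that differs between the two runs

base = dict(
    audience="medical workers on the night shift",
    goal="how nights unfold, how energy and mood change, how breaks are used",
)

# Same audience, same goal — only the guidance toggle changes.
exploratory = StudyConfig(**base, iris_enabled=False)
guided = StudyConfig(**base, iris_enabled=True)
```

Framing it this way makes the comparison fair: any difference in the outputs can be attributed to guidance, not to a different audience or brief.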

The same platform, two approaches: seeing what emerges first versus guiding the work earlier.


To IRIS, or not to IRIS: a question of guidance

1. IRIS OFF

In this mode, the system explores more freely and you get: broad coverage, unexpected patterns, edge cases and nuance, and a rich view of the landscape.

Ideal when you’re asking: “What’s going on here?”

The tradeoff: the output can be dense and may require more synthesis.

2. IRIS ON

In this mode, you actively steer the process and get: a clearer structure, stronger narrative, faster convergence, and insights that are easier to act on.

Works best when you’re asking: “What do we need to understand or decide?”

The tradeoff: you exchange breadth for focus, so you may see fewer of the unexpected edge cases that free exploration surfaces.
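The decision rule implied above can be sketched as a simple heuristic — purely illustrative, not platform logic: landscape questions suggest exploration, decision-oriented questions suggest guidance.

```python
def recommended_mode(question: str) -> str:
    """Illustrative heuristic, not the platform's logic:
    exploratory (IRIS off) for landscape questions,
    guided (IRIS on) for decision-oriented questions."""
    decision_cues = ("decide", "need to understand", "should we", "which option")
    if any(cue in question.lower() for cue in decision_cues):
        return "iris_on"
    return "iris_off"
```

In practice the choice is a judgment call, but the two prompts from the article map cleanly: "What's going on here?" points to exploration, while "What do we need to understand or decide?" points to guidance.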

Here’s a side-by-side comparison:


Same system, different outcomes

What surprised us most was that both approaches surfaced many of the same core insights.
What changed was how those insights were shaped.

- Exploratory mode preserves ambiguity; IRIS resolves it.

- One prioritises discovery while the other prioritises clarity.

Why this matters

As AI becomes native to research, the real question is not:

“Should research be automated?”

But:

“How much direction should we apply, and when?”

Working with a guided research partner

One thing that stood out in the workflow guided by IRIS was how different the starting point felt. Instead of needing a fully formed plan, we were able to begin with a rough idea and shape it through an ongoing conversation with IRIS.


That conversation wasn’t just about the research topic itself. We could ask questions about how to approach the work, get suggestions on where to dig deeper, and understand what was happening behind the scenes as the research evolved. IRIS acted less like a tool executing instructions and more like an experienced research partner helping pressure-test assumptions, suggest angles we hadn’t considered, and flag tradeoffs along the way.


As the work progressed, that guidance became more intentional. IRIS asked how we wanted the final report to be framed, what sort of stakeholders it was for, and whether there were specific areas of insight we wanted to emphasise. That made it easier to move from exploration to something more deliberate, without losing the richness of the underlying research.


What we didn’t anticipate was what this made possible next. Another benefit of the guided workflow was how easy it was to extend the research. We ran a follow-up concept test using the same audience definition and asked IRIS to produce a combined report across both phases – something that would have taken many more steps without IRIS in the loop.


Along the way, it explained what was shared between the studies and what was newly generated, making the continuity and its limits clear.
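That continuity between phases can be modelled in a few lines. The sketch below is hypothetical — `Phase` and `combined_report` are illustrative names, not the Synthetic Users API — but it captures the idea: a follow-up reuses the same audience definition, and the combined report separates what carried over from what is newly generated.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    # Hypothetical model of one study phase; not the real platform API.
    name: str
    audience: str
    findings: list

def combined_report(first: Phase, follow_up: Phase) -> dict:
    # A follow-up must reuse the same audience definition; the report
    # makes explicit what is shared and what the new phase generated.
    if first.audience != follow_up.audience:
        raise ValueError("phases must share an audience definition")
    shared = [f for f in follow_up.findings if f in first.findings]
    new = [f for f in follow_up.findings if f not in first.findings]
    return {"audience": first.audience, "shared": shared, "new": new}
```

Keeping the audience definition constant is what makes the combined report meaningful: the second phase extends the first rather than restarting it.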


Our takeaway

The future of research isn’t just agentic. It’s steerable.

With Synthetic Users, teams can choose when to let agents explore and when to guide them toward what matters most.


Sign up to our newsletter

AI-powered user research platform that replaces traditional participant recruitment with synthetic agents. Get research-grade insights in minutes, not weeks.

© 2026 Synthetic Users Inc.
