Two ways to run research with Synthetic Users and why the difference matters
Iris, what is the difference when using agents to accelerate research?
Foundation models give Synthetic Users their core capabilities. We enhance them with synthetic data and RAG layers for realism and business alignment, all within a collaborative multi-agent framework for richer interactions.
But why shouldn't I go directly to a model like GPT, Claude, or Gemini? Because going straight to a single general-purpose model gives increasingly hyper-rational answers that don't read like real (Organic) customers. People are smart, but they use shortcuts and are influenced by subconscious drivers. We fix that by:
We’re model-agnostic. A lightweight router selects—and sometimes sequences—multiple LLMs (and can aggregate outputs). This hedges the failure modes of any single model and improves realism.
What you control: the task (“evaluate onboarding flow”), audience hints, and constraints (jurisdiction, tone, risk).
What we adjust: model choice/order, temperature, aggregation, and guardrails.
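As an illustration of the routing logic described above, here is a minimal sketch (the model names, realism scores, and cost numbers are invented for the example; they are not our actual pool or weights):

```python
# Hypothetical model pool; the names and scores are illustrative only.
MODELS = {
    "model_a": {"realism": 0.8, "cost": 0.3},
    "model_b": {"realism": 0.6, "cost": 0.1},
    "model_c": {"realism": 0.9, "cost": 0.7},
}

def route(task: str, risk: str = "low", max_cost: float = 0.5) -> list[str]:
    """Pick an ordered sequence of models under a cost constraint."""
    affordable = {m: s for m, s in MODELS.items() if s["cost"] <= max_cost}
    # Sequence from most to least realistic; high-risk tasks get a second opinion.
    ranked = sorted(affordable, key=lambda m: affordable[m]["realism"], reverse=True)
    return ranked[:2] if risk == "high" else ranked[:1]

def aggregate(outputs: list[str]) -> str:
    """Naive aggregation stand-in: prefer the longest (most detailed) draft."""
    return max(outputs, key=len)
```

Sequencing plus aggregation is what hedges single-model failure modes: a brittle answer from one model can be outvoted or extended by another.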
We don’t just generate a random OCEAN profile. We remap behavior to personality and calibrate to real populations:
Short version: OCEAN is the language, but calibration is the guarantee that our personalities line up with the organic world.
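To make the idea concrete, here is a minimal sketch of sampling a cohort of OCEAN profiles from a calibrated trait distribution (the population means and standard deviations below are placeholders, not our calibration targets):

```python
import random

# Illustrative (mean, std) per OCEAN trait; real calibration targets
# would come from organic population data, not these numbers.
POPULATION = {
    "openness": (0.6, 0.15),
    "conscientiousness": (0.55, 0.2),
    "extraversion": (0.5, 0.2),
    "agreeableness": (0.6, 0.15),
    "neuroticism": (0.45, 0.2),
}

def sample_profile(rng: random.Random) -> dict[str, float]:
    """Draw one OCEAN profile from the calibrated distribution, clipped to [0, 1]."""
    return {
        trait: min(1.0, max(0.0, rng.gauss(mu, sigma)))
        for trait, (mu, sigma) in POPULATION.items()
    }

def sample_cohort(n: int, seed: int = 0) -> list[dict[str, float]]:
    rng = random.Random(seed)
    return [sample_profile(rng) for _ in range(n)]
```

The point of calibration is that cohort-level statistics (not any single profile) match the organic population you care about.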
At answer time we retrieve facts from your interviews, surveys, CRM notes, and product docs, and ground responses in them. No retraining required; updates flow through immediately. (Fine-tuning, when used, is separate and not required for grounding.)
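A stripped-down sketch of answer-time grounding, using keyword overlap in place of a real vector store (the document snippets and prompt format are illustrative, not our production pipeline):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query and return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def ground(question: str, docs: list[str]) -> str:
    """Prepend retrieved facts to the prompt so answers stay anchored in your data."""
    facts = retrieve(question, docs)
    context = "\n".join(f"- {f}" for f in facts)
    return f"Use only these facts:\n{context}\n\nQuestion: {question}"
```

Because the facts live outside the model and are injected at answer time, editing a source document changes the very next response, with no retraining step.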
Agents coordinate and learn from outcomes rather than relying on one monolithic prompt.
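A toy version of that coordination, with three hypothetical agents extending a shared state instead of one monolithic prompt doing everything:

```python
# Each agent reads the shared transcript and contributes its own piece.
def interviewer(state: dict) -> dict:
    state["questions"] = ["Why did you stop using the product?"]
    return state

def persona(state: dict) -> dict:
    state["answers"] = [f"(answering) {q}" for q in state["questions"]]
    return state

def analyst(state: dict) -> dict:
    state["summary"] = f"{len(state['answers'])} answer(s) collected"
    return state

def run_pipeline(agents, state=None) -> dict:
    state = state or {}
    for agent in agents:  # agents run in sequence over the shared state
        state = agent(state)
    return state
```

Splitting the roles this way lets each agent's output be inspected and improved independently, which is what makes learning from outcomes tractable.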
We capture misses, contradictions, weak coverage, parity deltas, and calibration drift. That data updates routing, prompt templates, and personality remapping so interviews get sharper on your audience and edge cases.
Task: “Clinical interview with an oncologist in Berlin about trial enrollment UX.”
The RAG layer plays a pivotal role in tailoring Synthetic Users to specific business needs: by integrating domain-specific knowledge bases, it enables dynamic content generation that is both contextually relevant and aligned with business objectives.
Digital twins try to clone individuals one-by-one. That breaks down with real audiences: combinatorial explosion, drift, privacy overhead, and brittleness outside observed data. We instead factorize (traits Ă— context Ă— knowledge), calibrate the trait distribution to the organic world, sample cohorts from that calibrated space, and compose behaviors that generalize and update instantly when any factor changes. For the deeper dive, read Synthetic Users vs Digital Twins:
https://www.syntheticusers.com/science-posts/synthetic-users-vs-digital-twins
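The factorization idea can be shown in a few lines (the factor values below are invented): because behaviors are composed from traits Ă— context Ă— knowledge, adding one new value to any factor instantly refreshes every combination, with no per-individual cloning.

```python
import itertools

# Illustrative factors; a real system has far richer values per axis.
traits = ["high-openness", "low-openness"]
contexts = ["mobile onboarding", "desktop checkout"]
knowledge = ["power user", "first-time user"]

def sample_space() -> list[dict]:
    """Compose the full behavior space from independent factors."""
    return [
        {"trait": t, "context": c, "knowledge": k}
        for t, c, k in itertools.product(traits, contexts, knowledge)
    ]
```

With 2 Ă— 2 Ă— 2 values this yields 8 composed behaviors; a digital-twin approach would need a separately maintained clone for each, and every new context would mean re-observing every individual.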
For more on bias, read this.
The quality we are known for.
This architecture underpins the generation of diverse and insightful Synthetic User interactions, supporting nuanced data analysis and decision-making processes.
These are the types of data that are ingested by Large Language Models.