How does Synthetic Users work? Synthetic Users uses advanced AI and natural language processing to generate synthetic personas that can mimic real human behavior (each Synthetic User has its individual FFM personality model). Science puts it very well in this article. The user chooses which type of interview they want to run: Synthetic Users offers four types of interviews for qualitative research. When the interviews complete, you can follow up with individual users, run an insights report summarising all interviews, annotate the various interviews, or simply share them with your team. Here is a high-level overview of our architecture.
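To make the FFM (Five-Factor Model, or "Big Five") reference concrete, here is a minimal sketch of what an individual personality profile for a persona could look like. The trait names come from the standard Big Five model; the field names, score range, and generation logic are assumptions for illustration, not Synthetic Users' actual implementation.

```python
import random

# The five standard FFM / "Big Five" personality dimensions.
FFM_TRAITS = ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]

def generate_persona(name, rng=None):
    """Hypothetical sketch: give each persona its own FFM profile,
    with every trait scored in [0, 1]."""
    rng = rng or random.Random()
    return {
        "name": name,
        "traits": {t: round(rng.random(), 2) for t in FFM_TRAITS},
    }

# A seeded example so the profile is reproducible.
persona = generate_persona("P1", random.Random(42))
```

Because each persona carries its own trait vector, two Synthetic Users given the same interview question can plausibly answer differently.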

How accurate are the synthetic users generated by Synthetic Users?

We've been working hard to ensure that synthetic personas behave as though they were organic humans, and we are constantly measuring this parity. Our success is measured by our ability to deliver that experience. Most important, though, is that you focus on the outcomes rather than on the nature of the synthetic personas: ensure your customer base is represented, but concentrate on the insights you are getting rather than on the synthetic nature of our participants. As the term illustrates, Synthetic Users are composite creatures put together in the bowels of a neural network with billions of parameters (depending on the foundation model used).

Try inputting a study that you have top of mind and compare it with the Synthetic Users outcome. Very quickly you'll be able to tell how far off we are. If the first pass feels too general, probe deeper. Unlike with Organic Users, you can ask as many follow-up questions as you wish.

How does Synthetic Users compare to traditional user research methods?

Synthetic Users has been designed to act as a discovery co-pilot for continuous insight. It accelerates an otherwise expensive and operationally taxing process by inverting it: you first run Synthetic Users to cover most of the problem space you are exploring, fine-tune your questions, and then spend less of your budget and time on organic interviews.

How does Synthetic Users handle diversity and representation of different user groups?

You decide your audience: the type of synthetic user or persona you interview is determined by how precisely you define it. You may wish to run a discovery process with a very niche audience or with a more heterogeneous group. You decide.

How does Synthetic Users ensure the results are unbiased?

Biases have served us humans for hundreds of thousands of years as a way to accelerate learning. What is important is that we reveal the biases as parameters when generating Synthetic Users, and give our users the ability to change them in order to get the best insights. With real interviews you can gauge biases and record them in your notes. With Synthetic Users you are able to do the same.
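One way to picture "biases as parameters" is a configuration object whose bias dimensions are explicit and adjustable before a study runs. The bias names and default values below are invented for illustration only; they are not Synthetic Users' actual parameters.

```python
# Hypothetical bias dimensions, exposed explicitly so a researcher can
# inspect and tune them instead of discovering them after the fact.
DEFAULT_BIASES = {
    "recency_bias": 0.5,          # weight placed on recent experiences
    "social_desirability": 0.5,   # tendency to give "acceptable" answers
    "brand_loyalty": 0.5,         # attachment to familiar products
}

def configure_biases(overrides=None):
    """Start from the defaults and apply researcher-supplied overrides,
    leaving untouched dimensions at their default values."""
    biases = dict(DEFAULT_BIASES)
    biases.update(overrides or {})
    return biases

# Example: dial down social desirability to elicit blunter answers.
cfg = configure_biases({"social_desirability": 0.1})
```

The point of the sketch is the workflow, not the numbers: because the biases are surfaced as parameters, they can be noted and varied across runs, much as you would annotate observed biases in organic interviews.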

How does Synthetic Users handle privacy and data security?

For most of our customers we currently use OpenAI's GPT-4 to power our service. OpenAI does not use any of the data you input into Synthetic Users to train its models. Your data is private and belongs to you alone. Check out our Terms of Service and DPA in case you need more detail. Note that for some of our customers we run bespoke models, sometimes on premise. If you require that, get in touch.

How do you handle the generation and management of synthetic user identities to prevent duplication or overlap?

To prevent duplication, we work to provide a diverse set of synthetic users within the defined profile. While some overlap may occur in separate studies, it's very rare.

Are there multiple variations within a single test with synthetic users, or do you use one amalgamated synthetic user profile to respond to the questions?

You can generate up to 10 diverse synthetic users for each study (as of this writing, April 2024) to capture the diversity within the segment. This helps avoid stereotypical answers and gives you genuine insights.
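One plausible way to keep a study's participants diverse rather than amalgamated is rejection sampling over personality trait vectors: draw candidates and discard any that sit too close to personas already in the study. The trait set, distance metric, and threshold below are assumptions for illustration, not Synthetic Users' actual method.

```python
import random

# Big Five trait dimensions used as the persona's "coordinates".
TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def distance(a, b):
    """L1 distance between two trait vectors."""
    return sum(abs(a[t] - b[t]) for t in TRAITS)

def sample_diverse_users(n, min_distance=0.8, seed=0):
    """Hypothetical sketch: accept a candidate persona only if it is at
    least min_distance away from every persona already accepted."""
    rng = random.Random(seed)
    users = []
    while len(users) < n:
        candidate = {t: rng.random() for t in TRAITS}
        if all(distance(candidate, u) >= min_distance for u in users):
            users.append(candidate)
    return users

# A study with 10 mutually distinct synthetic users.
study = sample_diverse_users(10)
```

Enforcing a minimum spacing in trait space is one simple way to avoid the stereotypical, averaged-out answers a single amalgamated profile would produce.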