How we compare interviews to ensure we improve our Synthetic Organic Parity

How do we know we are right? How do we know our Synthetic Users are as real as organic users? We compare.

At Synthetic Users we have a page stuck to our wall that simply reads: "The best possible user interviews using LLMs." That's great, but how do we ensure they are indeed the best? How do we know we are right?

How we compare qualitative interviews

Step 0: Recruiting

We use various services to recruit users: Prolific.com, UserTesting.com, UserInterviews.com. All we can say is that it's painful: participants don't show up… But this whole process is absolutely necessary for us. It's the only way we know we're getting better Synthetic Users.

Step 1: Running organic interviews

We used Lookback to run the interviews, which were then transcribed.

In this case, the interviews explore how primary and secondary school teachers in the UK incorporate technology into their classrooms.

Here is a zoomed-out view. At the end of every interview we append the topics mentioned in it, effectively starting our codebook: a collection of themes/topics we can then compare with the Synthetic side.

Step 2: Identifying Common Themes

First, we identify the themes and topics that are common in both interviews.

Qualitative Analysis: We manually review the flow of conversation, use of idiomatic expressions, and overall readability. We call this glanceability, and with it we assess whether the responses are relevant and appropriate for the questions asked, mirroring the real interview's context-responsiveness. To be honest, for the purposes of this comparison study we are initially more interested in content overlap, because we can always drill deeper with Synthetic Interviews if we feel an interview stays too shallow.

Context Analysis: We determine how well the participant responds to the context provided in the interview questions. We do this through topic consistency, comparing the number of topics brought up, first within these organic interviews and later against the synthetic interviews.
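To make topic consistency concrete, here is a minimal Python sketch. The interview names and codebook entries are illustrative, not our actual data; the idea is simply that each interview maps to the set of topics appended at the end of its transcript.

```python
# Illustrative codebooks: each interview maps to the topics appended
# at the end of its transcript. Topic names are made up for this sketch.
organic_codebook = {
    "organic_01": {"interactive whiteboards", "training gaps", "digital equity"},
    "organic_02": {"tablets", "training gaps", "screen time"},
}
synthetic_codebook = {
    "synthetic_01": {"interactive whiteboards", "digital equity", "budget"},
    "synthetic_02": {"tablets", "training gaps", "parental concerns"},
}

def topic_counts(codebook: dict[str, set[str]]) -> dict[str, int]:
    """Number of distinct topics each interview brings up."""
    return {interview: len(topics) for interview, topics in codebook.items()}

def all_topics(codebook: dict[str, set[str]]) -> set[str]:
    """Union of topics across one side's interviews."""
    return set().union(*codebook.values())

print(topic_counts(organic_codebook))    # {'organic_01': 3, 'organic_02': 3}
print(all_topics(organic_codebook) & all_topics(synthetic_codebook))
```

Comparing the per-interview counts tells us whether the synthetic side is as topically rich as the organic side; the intersection of the two topic pools is the raw material for the overlap scoring further down.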

Here is the manual process from a previous comparison study.

To make this more measurable, we also run a Word and Phrase Frequency Analysis plus an N-gram Analysis, where we compare the frequency of certain words or phrases to identify linguistic patterns. There is software out there, like NVivo, that can help you do this quicker.

To perform a quantitative textual analysis, we extract and count the occurrences of individual words and phrases that are relevant to the themes of both interviews. In this case we use a simple GPT. Given the length of the interviews, we focus on key terms related to the teachers' challenges, emotions, and solutions. It's important to narrow down the criteria for parity, otherwise the target is too large.
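If you'd rather not reach for NVivo, a few lines of Python get you started. This is a sketch that assumes you have plain-text transcripts; the key terms and excerpts below are placeholders for the challenge/emotion/solution vocabulary you actually track:

```python
import re
from collections import Counter

def ngrams(tokens: list[str], n: int) -> list[tuple[str, ...]]:
    """All contiguous n-grams in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def term_frequencies(transcript: str, key_terms: set[str]) -> Counter:
    """Count occurrences of the key terms (unigrams and bigrams) we track."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    unigrams = Counter(t for t in tokens if t in key_terms)
    bigrams = Counter(" ".join(b) for b in ngrams(tokens, 2)
                      if " ".join(b) in key_terms)
    return unigrams + bigrams

# Placeholder key terms related to challenges, emotions, and solutions.
KEY_TERMS = {"training", "frustrating", "whiteboard", "screen time", "budget"}

organic_excerpt = "The training was frustrating and the whiteboard kept failing."
synthetic_excerpt = "Budget limits training, and screen time worries parents."
print(term_frequencies(organic_excerpt, KEY_TERMS).most_common())
print(term_frequencies(synthetic_excerpt, KEY_TERMS).most_common())
```

Putting the two frequency tables side by side shows where the synthetic vocabulary tracks the organic one and where it drifts.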

Step 3: We summarise the 8 reports into Organic Insights

If you run organic interviews on a regular basis, you are familiar with the process up to here.

Step 4: We run the same interview script, using a Custom Interview within Synthetic Users

Step 5: We then run the Synthetic Interviews

This gives us 8 Synthetic Interviews to set alongside the 8 organic ones.

Step 6: Quantifying Overlaps

Based on these common themes, we quantify the level of overlap. Because these themes are fundamental to both sets of interviews, they contribute heavily to the similarity score; if they were the only factors considered, the score would be quite high (around 90 or above), since these themes are central to both interviews.
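One simple way to put a number on thematic overlap is a Jaccard-style ratio over the two theme sets. A minimal sketch using the themes from this study (our actual scoring is richer than this):

```python
def thematic_overlap(organic_themes: set[str], synthetic_themes: set[str]) -> float:
    """Share of themes common to both sides, as a 0-100 score."""
    shared = organic_themes & synthetic_themes
    combined = organic_themes | synthetic_themes
    return 100 * len(shared) / len(combined) if combined else 0.0

organic = {"technology use", "integration challenges", "support and training",
           "impact on learning", "digital equity", "future aspirations"}
synthetic = {"technology use", "integration challenges", "support and training",
             "impact on learning", "digital equity", "future aspirations"}
print(thematic_overlap(organic, synthetic))  # 100.0 when the theme sets match
```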

A method to arrive at a parity score

Quantitatively calculating a parity number in this context involves subjective judgment, since we're dealing with qualitative data. Here is the simple weighted framework we use (a code sketch follows the list):

Thematic overlap (30%)

Depth and specificity of insights (30%)

Comprehensiveness of coverage (20%)

Qualitative alignment (20%)
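In code, the framework is just a weighted average of four criterion scores. A minimal sketch; the example scores are illustrative, not the numbers from this study:

```python
# Weights from the framework above; each criterion is scored 0-100.
WEIGHTS = {
    "thematic_overlap": 0.30,
    "depth_and_specificity": 0.30,
    "comprehensiveness": 0.20,
    "qualitative_alignment": 0.20,
}

def parity_score(scores: dict[str, float]) -> float:
    """Weighted average of the four criterion scores."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Illustrative inputs, not the actual scores from this comparison study.
example = {
    "thematic_overlap": 95,
    "depth_and_specificity": 85,
    "comprehensiveness": 90,
    "qualitative_alignment": 92,
}
print(round(parity_score(example), 1))  # 90.4
```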

In the case of this comparison study, both sets of interviews score highly across all these criteria, with particularly strong performance in thematic overlap and qualitative alignment.

  1. Thematic Overlap: Both reports cover essentially the same themes, including the use of technology, integration challenges, support and training, impact on learning, digital equity, and future aspirations. This comprehensive thematic overlap is a strong indicator of alignment in the core areas of interest.
  2. Depth and Specificity of Insights: While the Organic report provides more specific examples and the Synthetic report offers broader insights, both approaches are complementary rather than contradictory. The Organic report's specificity enriches the Synthetic report's broader themes, making them more tangible and relatable. This complementary nature enhances the overall understanding of technology's role in education, suggesting a closer alignment than initially assessed.
  3. Comprehensiveness and Coverage: Both reports, despite their differences in presentation, contribute to a holistic understanding of the current state and future potential of educational technology. They address not only the practical aspects of technology use but also the pedagogical, ethical, and strategic considerations. This comprehensive coverage further supports a higher parity number.
  4. Qualitative Alignment: Beyond the visible topics, there's a qualitative alignment in the underlying sentiments and concerns expressed in both sets of insights. Issues like digital equity, the need for ongoing support and training, and the excitement for future technologies are universally acknowledged. This shared understanding and prioritization of key issues in education technology suggest a deeper alignment.

What have we learned from comparing interviews?

  1. Synthetic Users at first glance lack the depth of organic users. You and I would be surprised if this wasn't the case. One of the learnings has been to enrich our Synthetic Users with more personal accounts and challenges (it's in the dataset, we just had to surface it). As of late Feb 2024, when you run interviews you'll find them much richer in personal accounts.
  2. Follow-on questions are really important. Don't give up at first glance. Unlike with organic interviews, with Synthetic Users you can always ask follow-on questions. Drill down into areas where you feel the answers are too generalist.

  3. Be specific and get specificity back. Specifying your users' qualities will produce more specific answers, so be as specific as you can. This is especially relevant if you don't want your Synthetic Users to default to more encompassing levels of consciousness, i.e. prioritising policy over personal budget.

PRO TIP: Start with a recent study you have top of mind and compare it yourself in order to gain confidence.

This is what we tell all our users when they are starting out: try it, compare it with an existing study. Put the Synthetic interviews next to the organic ones and see how comfortable you feel about it.