If you’re new to online human insight, it’s easy to be confused by the range of tools and techniques available to you. How do you know which ones to use for which problems? The most common question we hear is about the choice between self-guided and moderator-guided tests, also called unmoderated and moderated.

Many professional researchers have strong opinions about which method is best, but in our experience you need to use both. Each has strengths and weaknesses, and you can get better results if you’re thoughtful about when you use them and how you mix them together. Lisa Lloyd is an expert on those tradeoffs. She’s a customer success manager at UserTesting, and has run hundreds of training sessions with human insight users. In this article, we give you an overview of the basics of the two methods, and then we talk with Lisa about how to use them together to get better insights and avoid biases.

What’s the difference between self-guided and moderator-guided tests?

Moderator-guided tests (also called moderated tests) are built around live interviews: an interviewer and a participant interact in real time. In traditional market research, the gold-standard interview is conducted in person, in a facility where the participant can be recorded and observed. Recruiting participants and scheduling those interviews can be an incredibly tedious and slow process. Today, a good human insight system will automate recruiting and scheduling, and the interviews will be conducted and recorded online, saving you a lot of time and money. During the online interview, a good human insight system will let you share your screen with the participant, and let them share their screen with you.

Self-guided tests (also called unmoderated tests) have no interviewer. Instead, the participant uses their computer or smartphone to view questions and/or tasks on screen, and responds to them. A good human insight system will automatically recruit participants for you, display the questions and tasks, and record the test. The video should include the participant’s face, their screen, and their device’s rear-facing camera (for real-world tasks).

A good human insight system will let you do both types of tests. It should also make it easy to analyze and share findings from the tests. Features to look for include:

  • Automatic transcription
  • Easy creation of clips by selecting text in the transcript
  • AI-driven sentiment analysis to help you identify insights
  • Easy sharing of clips and highlight reels to communicate findings

Advantages of moderator-guided tests

Flexibility. Because the interviewer is present throughout the test, the questions and subjects can be adjusted in real time based on the participant’s reactions. If they’re confused, the interviewer can stop and explain things. If the participant raises an interesting subject, the interviewer can ask follow-up questions about it.

Personal rapport. A good interviewer can create a personal connection with the participant, encouraging them to open up in ways they might not do otherwise.

Moderator-guided tests are better when:

  • You are not sure which questions to ask, or you need to explore a topic in an unstructured way
  • The participant is likely to become confused by the test. For example, people are often confused by early-stage prototypes and sketches of software screens, because they do not behave like a finished product. A live interviewer can correct areas of confusion that could cause a self-guided test to go off track.
  • You feel that a live interviewer can navigate a delicate issue better than a written set of questions

Advantages of self-guided tests

Speed and convenience. Because a human being doesn’t need to be online with the participant, self-guided tests are extremely fast and easy to run. All of the sessions can be conducted at the same time, and you can use online tools to rapidly review the results. Since you don’t have to be online with each participant, you save a lot of time on your calendar. The net result is a much faster process. You can often get initial results from a self-guided test within hours, something that usually takes days for moderator-guided tests.

Candor. We find that in some cases, people will be more candid and relaxed when talking to their computer or phone than they are when talking to another human being. For more details, see our article on the use of self-interviews in needs discovery.

Less risk of unintentional bias. Sometimes participants in a live interview will unconsciously try to please the interviewer by modifying their responses. There are also other unconscious cues, inhibitions, and expectations that can affect people when they’re talking face to face. A self-guided test can reduce those biases. We discuss that with Lisa below.

Self-guided tests are better when:

  • The questions and tasks are well defined and unlikely to confuse the participants. For example, a test of a high-fidelity prototype or a finished website or app.
  • You believe participants and/or interviewers might be uncomfortable discussing a topic in a live conversation
  • You believe the presence of a live interviewer might skew the results
  • A skilled interviewer is not available
  • You need to move as quickly as possible

Lisa Lloyd on the tradeoffs between self-guided and moderator-guided tests

Lisa Lloyd

Q. Lisa, thanks for joining us. How often are human insight users confused about self-guided and moderator-guided tests?

A. Very often. It’s hard for them to know which one to use, and when. I try to make it easy for them. We have a lot of written material on the level of depth you can achieve with each technique and the time savings. But we also need to talk about intimidation and trust. Some interviewers feel intimidated by doing a live interview because they do not know how to do it. And there may be problems with conscious or unconscious bias between participants and interviewers.

Q. What do you mean by bias?

A. A live interview is a good methodology, but it creates a risk of two-way bias. By bias I mean if I am doing a live conversation with someone, I have my own bias based on who pops up on screen. I do not even know they exist before that, and I may have immediate conscious or unconscious reactions to them. The two-way part is the contributor’s bias toward me when I pop up. 

Q. Can you give me an example?

A. Imagine a white woman leading a live conversation on hair products compared to a Black woman leading that same conversation. The Black woman and the white woman could ask the same questions, but if they talked to me they would get two different sets of answers. Since I’m a Black woman myself, I have shared experiences with the Black woman that I don’t have with the white woman. The Black woman will likely get much more data from me. It’s not about one interviewer being better than the other; it’s the power of familiarity due to shared experiences that allows us to get into a much richer conversation.

Q. I understand. That’s a fairly specific situation around race.

A. No, it’s not only race. I think there’s actually a very, very high rate of inaccuracy in live interviews because of all of these things that live in our bodies, including gender, age, background, etc. We do not talk about that enough. 

You see it in many areas: beauty, education, law enforcement. For example, a woman will not get into the nitty gritty of her menstrual cycle with a man. (Editor’s note: The problem runs both ways. One of the leading cycle-tracking apps used self-guided tests to get feedback from users because the company’s male product managers were uncomfortable speaking directly to women about the subject.) 

People will say one thing self-guided and another with an interviewer. This happened during a project with a company that provided birth control without a prescription. The data from a live interview had significantly less detailed information and context than the unmoderated version. Participants spoke more openly about their health challenges and experiences during the self-guided test than they did in the live conversation. 

Another example is the language barrier. I recently interviewed someone with an Indian accent. I am Caribbean, so I am very aware of accents and I hate to see people dismissed because of their accents. But also I get to do this for work, so I can take the time to decipher what someone is trying to say. Oftentimes, UserTesting customers have asked us to replace tests because they didn’t understand the contributor’s accent. Couple that language barrier with the small window of time most interviewers have, and tension builds. The interviewer is not usually a trained researcher who has learned how to quickly navigate the distracting nuances that may happen in live interviews. They rush through the conversation or become disinterested and exasperated, because they do not want to repeat themselves. It tests their patience. 

We always say you can feel when the food is made with love. When I see some research, I can tell it was rushed and they were not having a good time. That is an issue. One of the biggest challenges with live interviews is accents and dialects. If the interviewers can’t easily understand what the participant is saying, they may miss important details or dismiss what is said. They naturally tend to focus on the people they can communicate with easily.

Q. Okay, so what can we do about those biases?

A. I think the best way to minimize the bias and inaccuracies is to follow moderator-guided tests with self-guided tests. You’re much less likely to trigger social norms in a self-guided test. Do the live interviews and then run the same questions self-guided and see if the answers stay the same across race, sex, gender, etc. 

A mixed-method approach is the most common recommendation we make, because a single method can be inadequate on its own. You get good data, but it’s muddy. You’ll get more insight when you pair a live interview with an unmoderated test. The goal is to have the most accurate information to best inform your next step.

This is a really big conversation for all of us to have.

Q. It sounds like a lot of work to do both moderator-guided and self-guided tests for every issue. Do you have any guidance on when it’s most important to do both? And when is it OK to skip the live interview?

A. This is my favorite question, because it really can be a lot of work, especially for people without research expertise. It’s most important to do both at key points in your process: at the beginning, when you’re doing discovery, and at the end, to confirm that what you learned in discovery can improve the customer and/or user experience.

At UserTesting, the platform is shaped with that in mind. We help our customers write scripts for their live interviews that guide the discussion and ensure that key questions are asked. Once you have that script, it also doubles as a test plan template. With a little tweaking to account for the contributor’s autonomy during an unmoderated test, those same questions can be saved as a template that you can reuse and edit as often as you need. This becomes a foundational asset that saves you time in the future and acts as version control too.

Q. How do you see this affecting the companies you work with?

A. We are seeing a constant increase in online sales and digital experiences. All of these experiences replace human interaction by integrating tech. If you study them using live interviews only, you risk passing the bias into the product. It’s like germs that get seeded into it.

We have to let companies know that it’s OK to do a live interview for discovery, depth, and a more guided approach, but without self-guided tests they’re missing out on the opportunity to gain a significant amount of confidence. People always ask about confidence in terms of statistical significance: how much they can trust numerical data. But this is another form of confidence, and it’s just as important.

I would say, how do we test in a way that accounts for social norms but also increases your confidence in the data? We can’t get rid of social norms, but we can reduce the bias they cause with intentional guardrails built into our testing process.

People ask me, do I need to do both? Yeah, you do. Because you are in your body with your own set of experiences that shape your reality in very unique ways. It never hurts to “double check” your work by supplementing a live conversation with an unmoderated test.

If you want to learn more

Here are some good references:

UserTesting on moderated vs. unmoderated tests.

Nielsen Norman Group on unmoderated tests.

Pointers from UserTesting on how to do an online interview.

The interviewer effect. There’s been a lot of market research on the effect that interviewers can have on an interview. Unfortunately, many of the articles were published in journals and are behind paywalls. There’s a good, although brief, overview on Wikipedia, with some references.

The evaluator effect. In usability testing, there has been a lot of research on the ways that different evaluators can skew conclusions about usability. Here’s a good article from MeasuringU (warning, parts of it are pretty technical).

Photo by LinkedIn Sales Solutions on Unsplash

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.