Summary

The riskiest part of most experience research studies is finding the right audience. That’s the conclusion of Lisa Lloyd, a customer success manager at UserTesting who has 15 years of research experience and has spent the last four years teaching and helping hundreds of researchers in companies big and small. Lisa points out that when the audience isn’t right, the results of a study will be inaccurate, and possibly deceptive, even if the study itself is perfectly designed. Yet most researchers put far more effort into the study design than they put into the definition of the audience and the screener questions that find that audience.

Lisa is always teaching, starting with her “quant versus qual” Zoom background and the message shirts she frequently wears. She’s currently on a campaign to change the way researchers approach audiences. She urges them to spend as much time on screener design as they put into the rest of the study, and she says they should revise those screeners frequently because the audience is always evolving.

In a wide-ranging interview, Lisa made the case for that shift in mindset, and gave practical advice on how to define your audience and craft better screeners. The topics she covered include:

  • Audiences evolve constantly, so you need to refresh your audience definitions frequently.
  • Don’t settle for a single audience definition. If you divide the audience into several sub-parts, and test all of them, you’ll have a much better chance of finding problems.
  • When screening, ask about both demographics and behavior. You’re looking for a person, not a profile.
  • Don’t settle for your first version of a screener; question and refine it.
  • Pre-test screeners by running user tests on them. Have people think out loud as they fill out the screener, and observe both whether they understand the words and how the questions make them feel.
  • When in doubt, do more discovery.

Q. Tell me about your role and background, and why you’re so passionate about picking the right audience for a study. 

A. I work as a customer success manager, but I am a researcher at the core. I’m a sociologist, and I’ve spent almost 15 years doing research. The hardest part is always finding participants and convincing them to take your study. So I’ve always kept my eye on that one. 

I’ve worked at UserTesting for four years in a couple of roles, but most importantly I’ve worked with over 400 customers at different stages of learning. In every role, I see how people determine which participants are going to provide the most relevant information. And they almost never get it right, myself included.

Most people think I spend more time helping with research questions or even research strategy. But I would say a surprisingly large amount of my time goes to discovering and challenging: Why did you ask this particular person?

Q. Any guess on how much of your time you spend on that? 

A. If I gauge it by how many customers need audience refinement, a hundred percent. If I gauge it by what I’m talking about with my customers, a good 60% of those conversations are around audiences or the impact of selecting the wrong audience. 

Q. Got it. So this is not just an important problem, this is the most important problem.

A. Yes. Even when I was studying in undergrad, this was still the problem. But in undergrad, when you’re a baby researcher, you have the Institutional Review Board, this panel of researchers correcting you every step of the way. And so you don’t even realize when you’re learning and training how big of a problem it is. When you’re learning, there’s a ton of people to correct you and help you. But our customers don’t have that typically.

The audience is always evolving

Q. So it’s not just usability research or experience research that has this problem, it’s all sorts of research?

A. Yes, all sorts of research, and it’s across every vertical and every competency level. Whether you’re a newbie or an expert, there is no such thing as capturing the perfect audience. Audience definitions constantly need refinement, because the things that shape an audience are constantly changing. Environment, socioeconomic status, gender preferences, job: all of the things that make us who we are constantly shift. And so when you think you’ve got the audience right, something else changes. And how do you know when that change is important enough to redefine the audience?

Q. Are you seeing more of this problem recently? Or is this perennial? 

A. It’s always been this way. The problem was always there and I think the problem is always going to exist because that’s really the crux of being a researcher. But I think I’m seeing it more now because way back then, there was a lot of gatekeeping. You had to have certain credentials to be considered a researcher. Now, because the world has gotten more accessible, you have people using research terms, and discussing research assets in regular conversation.

I remember years ago first learning of the spectrum of sexuality. I was doing gender studies and I would come home and talk about it and nobody knew what I was talking about. Back then it wasn’t in common conversation. You were either in that field and you knew it existed, or you didn’t know anything about it. Now today, thanks to social media, internet access, broadband connection, you have people learning bits and pieces of things that were typically kept in the scientific bubble.

So the problem is always gonna be there. It’s always gonna persist. But yes, more people are talking about it now, especially because they’ve since learned that diversity is profitable. There have been protests worldwide on identity and freedom and accessibility. And so people are more interested in getting it right. Because when you get it wrong, there’s a possibility that the whole world will find out about it. So the problem always exists, but the visibility has increased and the risk of getting it wrong has also increased. 

The damage you do when you get the audience wrong

Q. So what’s the impact? When they get it wrong, what does it do to the companies that you’re dealing with? 

A. One, they lose money; tons of money. And that’s what they care about.

The second thing is, it can be devastating for the business’s brand and reputation. I can’t name names, because they’ve all been my customers at some point, but you’ve seen these big brands get slammed in the media for insensitive content, or discriminatory content, or products that are not inclusive.

Those big companies will still survive, right? They’re gonna be on ice for a couple days, as long as they’re in the news cycle. But they have enough financial cushion to survive a decline in sales. They’re gonna be bigger than a mistake. 

Smaller companies don’t have those luxuries. So not only will they lose money, they might actually go out of business. Their brand and reputation are tarnished. And then on top of all of that, you have the public scrutiny. So even if this company went on to do amazing things in the future, somebody’s gonna pull up a tweet from 2022: remember when you said this, or remember when you put this product out? And so there is this shaming, this scarlet letter of your mistake that will follow you. 

Third, you isolate your consumers. There’s a relationship that consumers have with companies. We saw this during the height of Covid: people want to patronize businesses that they believe are good to their employees and other humans. Like, if you were not taking Covid protocols seriously or you weren’t protecting the health of your employees, people chose not to shop with you. 

So beyond just the money loss, the public shame, the scrutiny, it also creates this huge disconnection and breaks down the trust between the customer and the business. 

Q. So to be sure I’m understanding the linkage here, you get the audience wrong and it skews the research that you’re doing. You believe stuff about the marketplace that isn’t true. You miss issues. Tell me how that linkage happens from the audience to the impacts on companies. 

A. It’s a couple of divergent paths. Get the audience wrong, and you get bad information that is then deemed as truth and good information. You make business decisions based on it. You create products around it, you create systems around it. You create strategies. You hire people based on that information. You fire people based on that information.

The other thing that it does is it can create this perfect storm where you push out the wrong thing because you didn’t even know that this was a bad idea. You didn’t even see the hidden dangers. 

We see it all the time around ethnic celebrations. You think you’re doing the right thing because you are celebrating this ethnicity that is typically overlooked. But because you didn’t get the right people in the room, you were off in your tone or you were insensitive. It was this hidden danger that was hiding under the good that you were trying to do. That one hurts the most, because you really tried, you just had the wrong people in the room.

One of the things that also happens with the wrong audience is that you view groups as monoliths. If you’re not using enough audiences, then you are assigning a behavior or a thought to an entire group of people and building around that when in actuality, maybe only a small segment of that group might have found whatever you built valuable.

Q. Can you think of any examples? 

A. I like to look at fashion a lot. Several years ago there was a photo shoot that featured a little Black boy wearing a shirt that said: coolest monkey in the jungle. 

Kids’ clothes are so cute. They’re so adorable. Kids are often compared to other cute things like little animals. And so, yes, I can see how they got there. But the racial and ethnic context of comparing Black people to monkeys is where the misstep happened. It could have been a totally different thing had that shirt been on a white child or an Asian child or a Latin child. But it’s more than just testing the copy on the shirt, and more than just testing with a couple of people in an audience, because one Black person in a group of five may not reflect how the wider group will respond. That one person might not have seen anything wrong where someone else would have. But you didn’t test enough people from these different groups. So the company was using something that we see all the time on kids’ clothes, but on the wrong body, with the wrong cultural context, it blows up.

That’s the hidden danger. They weren’t even trying to be discriminatory. On the other hand, you’ll see other instances where it’s completely blatant, and completely disrespectful to a racial or ethnic group. We’ve seen fashion brands think they’re being edgy by putting models in blackface, or by having very explicit themes modeled by young children. Those are more blatant attempts to use outrage to drive attention or virality.

But in the earlier example, more than likely, no one was trying to be discriminatory. No one was trying to be blatantly disrespectful, but no one was as intentional about audiences as they should have been. 

It’s always heartbreaking, especially when it’s our customers. If I see them in the news, I look at their tests and they will have tested almost every piece. It was just that they missed the audience. They did the research, they’re not lying. They tested it, but it wasn’t with the most relevant and insightful audience. 

How do you know when your audience is wrong?

Q. To me, the scariest thing about what you’re saying is that I could have this problem without realizing it. It’s like one of those medical commercials where they say your symptoms may be minor and you may not realize how important it is to see your doctor. So Lisa, what are the symptoms? How do I know when I’ve got this problem? How do I make sure I’m avoiding it? 

A. We all must go in knowing that we have that problem. It’s there, it’s like air. We’re living it, we’re breathing it, we’re exhaling it, it’s everywhere. 

One of the ways to get the right audience is a hiring thing. You may not always reach or find the audience on the research tool, but if you have a diverse workforce, someone might catch it. And it’s not just that they say it and catch it, it’s that you have to value it enough to pause and consider what they’re saying. So that’s the heavier lift. Let’s get a more diverse workforce. And in that diverse workforce, let’s make diverse people decision-makers as well.

The day-to-day cure, the one that we can probably do tomorrow, is running multiple audiences. Sometimes I see customers and they run one audience: “I wanna talk to women,” and that’s it. Send this out to women and get back five to ten or 15 women. And they’re like, “I did my research.” 

Well, yeah. But what kind of women were you looking for? Let’s go beyond ethnicity. Were you looking for women of a certain size? You can’t say women love your clothes when you don’t even know the size of the women you researched. You could potentially have 10 women contributors who are size four, when your line goes up to size 26. And so you’re missing this huge segment of people who are supposed to benefit from your products and are not showing up in your research.

Are you talking to tall women? Women who’ve just had babies? Women who have bigger breasts, women who have bigger hips? The way to do it is to ask “why?” five times. Why am I asking this woman? Which woman? Was she tall? What else? Keep asking descriptive questions to get close to the audience.

You also need to be both descriptive and behavioral. I might be tall, I might be thick, I might be a size 18. Okay. That’s the woman you were looking for, but you’re making workout clothes, and I don’t work out. So it’s more than the aesthetic, which is what I think a lot of people default to because they want their contributors to look the part. We got one Black girl, we got one Latin girl, we got one Asian girl, we got one Middle Eastern girl. Okay, great, but what are the behaviors now? Did you ask if I work out, did you ask what kind of workouts I do? Did you ask how often I work out? Did you ask if I have an injury? 

I know that is a lot. But I find that the minute you start pushing teams to have these conversations, they get more comfortable challenging what they perceived their audience to be. I’ve seen them full-on argue and realize, “that’s the audience you need to use for your decision; I need to use a different audience for mine.” And I’m like, “Yes! You two are looking for two different things and y’all are using the same audience, which is bad for both of you.” And what I really love is that as they mature they’ll say “we have this audience, but we have to update it. This is from last year.” And I’m proud that they’re recognizing that your ideas and your assumptions expire.

Aesthetically, I’m always gonna be a Black girl. I’m not tall and thick; I’m five foot two [1.6 meters]. I will likely always be Black and five two. But am I still in the season of working out? Am I working out, but with an injury now? Am I working out indoors or outdoors now? Things change, seasons shift. Am I working out because I don’t like the way I look, or because I’ve been diagnosed with a new health issue?

As they create new products and messaging and as they refine old products and messaging, there has to be this constant exploration of who is our audience, aesthetically and behaviorally, and how has our audience changed.

Audiences have to change because people change. It’s a growing conversation all the time. Also, just as your audiences have changed, your company’s reputation might have changed. Who’s willing to talk to you? Audiences are never static. And I know we never say never, but that’s a case where we can say never: they’re never static. They’re evolving. And the conversation should evolve as well. 

How to make a good screener

Q. As you describe this, I’m thinking the answer is probably writing a whole bunch of very subtle and well-constructed screener questions. Is that what it’s about?

A. I lean into the screener questions. Even with the demographic filters that we have, they are built on profiles that may have become outdated by the time the contributor gets the invitation to your test.

Even though they’re prompted to update it, people fall through the cracks all the time just because if you’re going through a big life change, you don’t think to update your participant profile. That may not be the first thing on your mind. So I usually have my customers practice writing screener questions. 

They often make subtle mistakes. One is they sometimes put the approved response as number one, just because that’s the one they want and they’re thinking of it. Another example is a little morbid, so bear with me. I’m a single parent, so I see it from many personal angles. Customers will typically ask, are you a parent, yes or no? And parenting, for anyone who’s done it, is not that black and white. You have parents who have lost children. You have parents whose children are 30 years old, independent, and haven’t asked for money in 10 years. What do you mean by parent? Do you mean someone who is currently taking care of a minor? Do you mean someone who is biologically a mother or father to a child? Do you mean adoptive? Do you mean foster? Are we talking about parenting because you’re the biological father? Are we talking about parenting because you are the custodial parent? We don’t really ask what we’re looking for.

Say you have a situation where you ask a person “are you a parent?” They have three kids who they don’t take care of, and your questions are around how do you start your school shopping. What would they know about it? They’re not the ones shopping for school supplies for their kids. Maybe the parent question isn’t the question to ask. 
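
[A hypothetical illustration, not Lisa’s wording: for a school-shopping study, instead of “are you a parent?” the screener might ask, “which of these best describes you?” with answers like “I’m currently responsible for the day-to-day care of at least one child under 18” (qualifies), “I have children, but I’m not their primary caregiver” (does not qualify), and “I don’t have children” (does not qualify). The behavior, not the label, decides who gets through.]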

The reason I bring that up is that I had a customer a long time ago — not at UserTesting, long before — who had signed up for a pregnancy tracker and then miscarried. The tech never updated. Her ads still assumed she was having a baby and kept sending her coupons for a baby she would never take home. 

And in that moment, as I’m hearing her, I’m thinking, who do we harm when we don’t update our information? When we don’t seek out the most timely and relevant information, who are we harming?

I’ve also seen this with addictions. We will ask questions that are somewhat insensitive. I had a gaming [i.e., gambling] customer, and I thought they were so compassionate: in their screener they asked about gambling habits, and one of the responses was, “I am recovering.” Anyone who chose it wasn’t pushed through to the test. 

That was one of the kindest things I’ve ever seen in a screener question: it made room for someone struggling with addiction. The easier questions would’ve been, do you like to gamble? Did you gamble in the last year? But push a person in recovery into a gambling study and you could unintentionally send their life in a different direction.

So I tell my customers a lot of their homework is building screener questions. They’ll ask for help to review a test. And I look at the test and the test questions are written very well. And then I’ll stop by the screener questions and I’m going [skeptical] yeahhhh…

One of the things that is bubbling up but often overlooked is proxy relationships. Sometimes you’re trying to get information from a contributor when the information actually lives with their proxy, the person who acts on their behalf. We see it a lot with caregivers, citizenship and immigration situations, parents and minors, language barriers.

My favorite story is about Wi-Fi setup. A customer is launching a test on their Wi-Fi product. They were testing setup instructions, and they screened for homeowners. The question was, “have you recently purchased a home?” I’m like, “okay, I fit the criteria technically, but I don’t set up my Wi-Fi stuff.” And they’re like, “well, who does?” I say, “my daughter.” And they’re like, “oh, we didn’t think about that.” 

And so they ended up rethinking their plan. They changed their screener questions to ask who is responsible for setting up your wireless devices. 

So you would’ve gotten 10…I don’t even know, Gen X, boomers? And not one of them has ever read your setup instructions or used the product.

So it ranges from cultural disaster to technology oversight, but that just goes to show the problem is so vast. It could mean medical mishaps, it could mean cultural incongruence, it could mean technology slip ups. All because we missed asking questions and questions and questions.

Q. What I’m hearing is that we tend to focus on the study plan and questions in the study because that’s kind of the sexy core of the thing. And a lot of companies are good at producing that. But they’re failing on the screeners. That’s seen as less sexy and less critical, maybe it gets less thought, and you almost need to flip it around. You need to spend more time on the screeners and making sure they’re really right. 

A best practice could be to use pre-defined audiences. Have your most subtle, smart research person, or UserTesting, work on defined audiences that everybody is going to use, so that you can make sure you’ve got quality control. Does that work? How do you manage this to make sure you’re doing a good job of screening? 

A. First, if you have researchers, then you definitely want to get them in the room with your subject matter experts. You want your researchers to sit with them and interrogate them about the people who use this product.

Then the researchers build generalized templates with those questions, so people can see the types of questions to ask. You want a general template where you can say: here are the kinds of questions I should be asking.

Secondly, create a cadence for updates. When are you going to investigate your audience next? I particularly like quarterly, that goes with the seasons. A lot of change happens when the season shifts, but it may be different for whatever industry you’re in. You may say, I look at one audience at the beginning of the year, but I look at a totally different audience when I get into the school start dates. I gotta update that audience. The type of car I want when I’m not shuttling someone to school is very different.

My third thing, I tell customers all the time: If it takes you 20 minutes to create a test, 15 of those should be on the audience. 75% of your test build time should be on the audience you’re testing. That is just key. 

My fourth one – I’ve seen a couple of customers do this, but not nearly as much as I would like – some customers will have a persona description that is so detailed and so rich that you can almost say, I would be friends with this person. And that persona will have a name, Emily or whatever. Those customers move faster through their research because they’re looking at behavior, they’re looking at aesthetics, constantly. So they’re not held up by it. They’re not doing research that they can’t use because it’s not the right audience. They’re moving faster because that muscle has been worked out a lot more. They ask questions, get answers fast, they ask more questions. 

Other customers, I ask “what’s your persona?” And they say “they have our credit card.” That’s information, but it’s not a persona. 

So one of the things that I really try to stress to my customers is, there are differences between criteria and audience and demographics. Demographics is just your profile information, the stuff we put in the US Census. Like I’m Black, my eyes are brown. Criteria are those other things that really make me stand out as a person who can provide relevant information. Do I have the app on my phone? How do I use the app on my phone? How long have I been a member of that app? You know, all the ways that I’m using this app that makes you say, “yeah, this is the person we want to talk to.” And then the demographics and criteria together make up the audience. 
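
[A hypothetical illustration of that distinction, using the earlier workout-clothes example: the demographics might be woman, age 25 to 45, wears size 4 to 26; the criteria might be works out at least twice a week, has bought activewear online in the past six months, and has the brand’s app on her phone and opened it in the past month. Together, the demographics and criteria make up the audience.]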

Sometimes the simplicity of the design in our product makes people think that screeners are easy. They don’t see any indication that you should think about this again. 

But if my customer writes a screener question, I say to them, how can you make this better? Write it again. How would you make it more targeted? How would you make it more specific? And then they write it again. And they start to get to different levels of ability. 

When they first start that exercise, they may ask something like, “which of the following apps do you have on your phone?” I’m like, “you know how many apps we have on our phone that we don’t use?” So I’m pushing them to rewrite the question. And they come back and say, “which of the following apps have you used in the last month?” Okay, we’re getting stronger, keep going. Then they might say, “how often have you used the following apps?” Now we’re getting somewhere.
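
[A hypothetical final version of that question, for illustration: “how often do you use this app?” with answers like “several times a week” (qualifies), “a few times a month” (qualifies or not, depending on the study), “it’s installed, but I rarely open it” (does not qualify), and “I don’t have it” (does not qualify). Actual use, not mere possession, decides who gets through.]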

What to do if you don’t have a researcher to help you

Q. If I have a smart researcher at my company, they can help me with this stuff. They can think through those subtleties, construct the screeners, all that sort of stuff. What if I don’t have that researcher? What if I’m in a smaller company or my department doesn’t have that person? How do I make sure I’m dealing with these subtleties appropriately? 

A. At UserTesting a couple years ago, we were trying to understand what people want to be called in terms of their race and ethnicity in a screener. Sometimes our customers don’t realize they can do this: You can test before the test. We sent a test out and said, “Hey, how does this term make you feel? What do you think of this term? What is the story here?” 

That was one of the humbling experiences for me at UT. I was really proud that we knew when to start over. We didn’t go in testing based on terms that we knew. We said we know three terms. What if there’s four more? And then when we got that data, we learned that there were a couple more terms and then we tested those terms. 

Sometimes customers don’t know that you can do that, or they don’t think to do it. Sometimes we have to run a test to understand the context, before we launch the test to find the information we’re actually looking for. 

Test before you test

Q. Let me play this back to you then. So I’m a non-researcher at a company that doesn’t have any researchers. And we’re looking to put together a test. The first thing I’m hearing is pay attention to the screeners. Spend as much time on the screeners as I do on the study itself, or more. 

Second, look at examples of screeners that others have done. I presume I could have UserTesting Pro Services help me with that if I wanted to pay for an engagement. But there are also some resources for writing screeners.

So you look at those, then you write up the screener questions that you’re gonna use and you run a separate test on those screener questions: “Here’s a list of questions. I want you to answer them and talk about the answers. Does anything confuse you? Why did you give the answer that you did?” Maybe turn on participant view so I can see your face. So I see if you’re frowning. And you’ll ask some stuff after, “how did it make you feel? Why did you give these answers?” And you’re gonna use that to inform the actual screeners that you’re gonna use for your real study. Is that right? 

A. Yeah. And it does require active listening and a lot of note taking. If you’re not a researcher, discovery is what’s gonna make you stand out. You do a little bit more discovery than a researcher. And that’s just because we researchers spent 20 years doing it. So we can skip that step sometimes.

In the UserTesting example, that’s why it was so humbling because once we realized we’re out of scope here, we went back to what we know best, which is you gotta run some discovery. You’re never too smart to do discovery. It never ends. 

But especially if you’re not a researcher, lean into discovery. Be curious. And sometimes you gotta test your screener questions by seeing, who did I get this first round?

An example of that is I met someone who’s been moderating tests for 25 years. I’m sure she’s an expert. And her challenge was, I’m not getting the people I want. And so I looked at her screener questions as we were going through the conversation. There were other people on the call and they also mentioned her screener questions. She said, “I’ve been moderating for 25 years. I know it’s not my screeners.”

And the second she said it, I thought, “look what we have here, we are responsible for our own demise.” Research moderation and crafting research questions are not the same exact skillset. Crafting screener questions, not the same exact skillset. You’re in that world, you do a little of each, but they’re not exactly the same. We have some people who are incredible moderators, and there’s another team that creates screener questions for you. And that was an eye-opener to me: How many times has this happened where customers have a particular competency in a research section and think it applies to the full breadth of research?

I’ve got some bad news. Even though you’re a great interviewer, and maybe you are great at writing test plan questions, that doesn’t necessarily translate to screening. So what we described here really applies to everybody. Whether you’re a skilled researcher or not, you should be questioning your screeners and working through screener development as methodically as you work through the rest of your study.

Even if I look at a screener that I wrote last week, I’m like, “oh, I should have changed that.” It’s like a car. 

We’ve made quite a few updates to cars since the 1900s. For a hundred years cars have evolved, but generally speaking, the wheels and the steering wheel are still there. 

What changes is the people. They change their circumstances, their environment changes, their income changes, their parenting status changes, their weight changes. We change more than our products. It’s the audience that’s changing. Very few products are changing as fast as humans. 

Look what happened during Covid and how much behaviors changed and how companies had to scramble to keep up. There’s a good recent example. Best Buy didn’t change their products; curbside pickup increased their sales. Did they change the layout of their store? Did they change products? Did they change their logo? Did they change the colors that they used? No, they changed their focus.

Our computers are not changing. Our phone accessories are not changing. We don’t need to test those a million times. What we need to test is the people who are using them. Nothing about the store changed. Nothing about their products changed. Everything changed about their consumers.

And because they were keeping that eye on the audience, they were able to start curbside pickup, so that when many others were declining in sales because of the lockdown, Best Buy was climbing. That’s one of the best examples: keep your eyes on the people and you win every time. 

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.