Summary

The speed of marketing in the digital age is a double-edged sword. We create messages and programs quickly, and deploy them instantly, but that often means we don’t have time to check our work with customers. The majority of marketing deliverables are not validated with customers in any way before they are released (link).

This “spray and pray” approach creates huge risks for marketing teams. Studies by the optimization community have shown that our guesses about customer reactions are wrong 66% to 90% of the time (link). We can use analytics after the fact to solve the worst problems, but by then marketing that doesn’t resonate has already wasted a lot of money and damaged the brand’s reputation.

How do we improve quality without slowing down the marketing process? Savvy marketers are using fast experience tests to vet and optimize their work before it launches. This is helping marketers in diverse roles including content, creative, optimization, project management, and product marketing. Fast experience tests let these marketers move quickly with high confidence that their work will resonate with customers. They prevent bad experiences before they happen, increase conversions, reduce operating costs, and create more loyal customers.

How fast feedback improves marketing

We interviewed five marketing leaders at UserTesting to understand how they incorporate experience tests into their work. Their most common use cases include:

  • Message and creative validation. Jahvita Rastafari and Lara White use experience tests to validate messaging before it goes to creative, and then again at each stage of development. This ensures that problems are caught as soon as possible, when they’re easier to fix, and ensures that the finished work will resonate.
  • Competitive analysis and brand perception. Tom Valentin and Meg Emory use experience tests to compare customer experience across different brands, in order to identify strengths and weaknesses in competitors’ strategies. This helps them find ways to enhance brand perception and mitigate risks.
  • Optimization of user journeys and e-commerce flows. LeTísha Shaw tests various elements of the user journey, from initial message comprehension to navigating the purchase process. Her team identified and removed unnecessary steps in a new e-commerce flow, improving the overall user experience and increasing the likelihood of successful transactions. Similarly, Meg Emory uses user tests to understand how users interact with different elements of the site and adjust designs accordingly, ensuring that landing pages and navigation structures are intuitive and engaging.

Here are their stories…

The content creator: Lara White 

Lara is senior director of digital marketing at UserTesting. Her team manages the company’s organic social media channels, paid search and social ads, SEO, and content marketing. Lara was also a leader in the “UT at UT” program, which enabled broader use of user tests within the company’s workflows.

Test content both reactively and proactively

MM: Within marketing, what are the problems you think experience testing should be applied to? What are the dilemmas or the situations that experience tests can help to resolve?

LW: The tests I’ve seen the most are more reactive, like we see something not performing well or not the way we would have expected. We suspect there’s something going wrong here, and we want to try to understand why. So with UserTesting we take someone through that experience or expose them to the thing that’s not performing, to hopefully give some insights. 

Proactive testing. LW: And then the second use is more proactive: being able to quickly and confidently put out content, essentially create something that is for a target audience, that you feel confident is going to resonate with them.   

The problem that solves is getting ahead of potentially putting something out, hoping for the best, and then it not performing. And also to be able to confidently get alignment, or get people to rally around what it is you believe you should do. So you can show this isn’t just something I thought of one day, there’s actually some evidence behind it.

Use test results to rally people behind a concept

MM: How often are you in that situation where you need to rally people? And when I say you, I mean a marketer with your experience. How often do you have to persuade people that it’s the right thing to do?

LW: It’s interesting. I feel like it comes in waves. My team creates content that we think of ourselves, like we have a content strategy that we drive, but we also partner with other folks who come to us and say, “I want to work with you on creating this piece of content because it’s going to be part of driving a program that I’m doing.” We put together an outline and test it, and then go back and tell them, “Hey, I know this is what you asked for, but we tested it. And it’s going to fall flat.” It’s those circumstances where we need to get people on board with our recommendation to do something that’s different than what is being asked for.

It’s an ad hoc thing that seems to come up every now and then.

When is it time for a reactive test?

MM: Let’s talk about being reactive. Say something’s not performing well. How much time do you have to figure out what’s wrong and figure out what to fix? Is that usually a day, or is it a week or a month?

LW: There isn’t usually some sort of deadline. It’s more so like a puzzle where, if we can fix it, that’s good for the business. So the faster we can figure it out the better.   

MM: It sounds like the situation in my backyard if there’s a sprinkler that isn’t working. It’s not like I have to fix it in that minute, but if I don’t fix it in a few days, the plants will start dying. That’s an interesting contrast to a product team where they’ve got these agile cycles they have to hit, so if you can get something anytime within a sprint, that’s okay. But if it goes beyond the edge of the sprint, that’s a big issue. It sounds like in marketing, there’s a steady accumulation of pressure. You want to move fast, but there isn’t always a rigid deadline. 

LW: It’s different with external requestors. The other factor is the overall timelines that we agreed to. We build in time for the testing as part of that. If we’re doing a work-back schedule, we would build in the time for the testing. Frankly, though, testing is so fast that it’s not like we need to build in a ton of extra buffer time. It’s more that we need to make sure that the step is in there so we remember to do it. It doesn’t really make anything go slower.

Get a champion and an executive sponsor

MM: So it sounds like you have formally built into your program that you’re going to do some testing. How has that worked out?   

LW: Frankly, we haven’t yet built it into the habit of the team in the way that I would have hoped. That’s on me ultimately as a manager. But I know exactly what happened. One, we had a champion on our team who is currently on leave. She was my support, championing that you’ve got to test it, or did we test it, or why didn’t we? And without that voice, it’s not part of everything every time the way it was when she was championing it. 

And the other piece is that we relied too much on the push from our executive to incorporate this into everything we were doing. It was in her annual goals, and we needed to report on it in every all-hands. So people thought, “Oh, shoot! I’ve got to do this, because when she asks, I must have something to show her.” I’m just being honest, that’s the reality. Over time there’s been a bit less pressure and that has translated into it not always being the priority I know it should be. Combine that with the individual contributor champion going on leave, and I realized that we were really dependent on them pushing us. It hasn’t been as ingrained. We still do it, but it’s a bit more ad hoc and less like it’s just part of our process.

The lesson is, you need more than one champion at each level. 

How to make testing a habit

MM: That’s interesting. People had compelling results with the tests, but that didn’t necessarily drive repeat tests. Is something missing in the process? 

LW: I think even when something is compelling, it still needs to become a habit, especially when we’re trying to move quite quickly. For example, I hired a new employee, and despite all the training we did for him about our system, it wasn’t until the first time he ran a test himself and saw the results and the insights, that you could see the light bulb went on for him, like “holy cow, something about doing it myself clicked for me.” But that wasn’t enough. He had that moment of clarity, and then quickly moved on to the next thing. 

Create repeated “holy cow” moments. LW: We need to force people to have those moments more than a handful of times to get to the point where they literally feel like they cannot do their job without this, which is where people eventually get to. I’m addicted to this: if I went and got a new job somewhere else, and they didn’t have testing, I wouldn’t know how to do my job. 

MM: I think there’s something to what you said about habit. If I think about other things in my life that are really useful — for example, stuff you can do with AI – I know what you can do with it, but do I think to apply it to all the situations where it could help me? Nope, I don’t. So I think there’s probably a best practice in there about repetition. You do have cases of people who have gotten into that habit, and it’s now indispensable for them. So there is a threshold you can get past.

When and how to test in the content development process

Test the outline. LW: There are key points where we do the testing in content development. The first one is outline / concept testing, which might also include looking at other companies’ content that was maybe serving as inspiration. So we put an idea in front of our target audience and ask, “Hey, does this resonate? Why or why not?” We don’t ask the questions in as leading a manner as that, but basically, we’re trying to understand if what we have in mind resonates with this target audience, yes or no. And then usually what comes out of that is either yes, move forward; or no, try again. And if it’s try again, then we’ll do another set of testing until we get to the yes.

The other thing that interestingly comes out of that is a lot of times without us even asking, participants will talk more about what they would expect from the content. One of our employees has this really great quote where she talks about how essentially the content writes itself because she just takes all the things that the participants tell her that she didn’t even ask about. And then that helps inform her when she fleshes it out. 

Test the first draft. LW: So that’s the first step. And then the content is drafted. When we think we’re at final copy, but it’s not gone into design yet, we test again. That’s a little tricky, because if it’s really long, you don’t want to sit for half an hour and watch someone read it, but we usually will ask them to scan it and ask: does anything pop out that really resonates or that doesn’t quite fit? That’s our chance to see if anything got lost in translation between the outline or concept that we had validated and the actual writing of the piece. That’s our opportunity to correct it before it goes to design.

That’s less of an issue for us now, because we have more control over the formatting and the design as we’ve moved from PDFs to web-based documents. In the past, we had to hand off a piece of content to the design team, and they would design it, and then, if we needed to change it, it’d be a real pain to go back to them and say “we tested it, we need to change these things.” That would drive them crazy.

Right now, because we have more control of building it ourselves, sometimes we skip that step that I just described, and we just go straight to build and then test the final asset, which has some value, because when people can see something in context, I think we get more helpful feedback. 

Test the final asset. LW: So then, the last piece would be: it’s built, it’s about to go live, and we have one last chance to get it in front of people. Was there anything that we missed? Or is there anything that, now that it’s in context, we need to reword? Is the image confusing, that kind of thing?

So those are the gates. But if we do nothing else, the first one is the most important and so I try to stay on top of that one as much as I can: before we move past outline stage, let’s get a pulse on it. And then I feel more comfortable about moving forward with building it out, even if we don’t do any other testing.

The most important test: Get a pulse on content at the outline stage

MM: So, in terms of building out best practices, I am hearing that you need a tactical champion who’s reminding everybody, and you need your exec champion to be pushing on it. You need to repeat that for a while. And the most important test is at the outline stage.

How to explore customer needs

MM: When I talk to product teams, one of the big things that they want to use UT for is needs discovery, understanding how customers think and feel in general, without reference to any particular deliverable. Do you have the equivalent of that?

Create a direct connection between the team and customers. LW: Yeah, that reminds me of another use case that we do. For the most part, we are getting some kind of starting point of messaging and positioning from product marketing, so we’re not usually starting from scratch. And we’re kind of relying on that. But once a quarter, my team does a live conversation with a member of our target audience. We take turns being the interviewer, and it’s literally just to get to know them. Tell us about your day, what are your challenges? What drives you crazy? What do you love? And then, what technology do you use? How do you understand if you’re succeeding? How are you measured? And then as part of that, we’ll often ask them what kind of content they consume. Are you on social media only for personal use or are you also using it to learn about stuff related to your job? So it’s just getting to know the persona. 

We don’t often have concrete, actionable things that come out of that. But what I do notice is that it really seems to energize the team to have that direct connection. I think there’s value in that, even though we don’t necessarily have any actions that come out of it.

MM: Do you do anything with those interviews afterwards? Like, do you capture video that you share around? Does it get archived, or anything like that? Or is it more in the moment everybody watches it together, and then we’ve learned, and we move on?

LW: It is recorded, and we clip and share it ad hoc with people who weren’t there. Before we realized that transcripts could do it for us, we used to have a note taker capturing the key points. But we didn’t systematically revisit the videos. It’s more ad hoc. Months later we’d be talking about something else, and we’d say, hey, remember we did that live conversation and the person talked about a problem that was kind of like what we’re trying to solve for now? So it more just becomes part of our shared history of things.

How to test successfully with minimal researcher support

MM: Do you have any professional researchers helping with this, or are you guys just doing it all on your own?

LW: Right now we do it on our own. When we first started doing it, we had a formal onboarding that included working sessions and brainstorming. We came with our use cases, and our researchers talked through how we might tackle them.

MM: If you’ve got no professional researchers involved, how do you know that the research is being done properly?

Basic best practices are good enough for simple tests. LW: As part of the onboarding we had, we talked about best practices. And I think for me when I’m looking at what my team is testing, I feel we have some awareness of basic best practices that we should follow. The reason I’m not super concerned about that is because we don’t actually ask a lot of questions. Normally, we’re asking for people’s first impressions. So what I impress on my team is just let’s make sure we’re not asking the question in any leading kind of way. 

I’m on the lookout for people either asking leading questions or pulling together the insights in a way where it’s clear that they’ve just done this to validate what they wanted to do anyway. I feel like I know enough to be able to catch that.

MM: It sounds like it’s almost templated, like there are certain standard questions you’re going to ask people, and it’s going to be pretty similar from test to test. As long as you know that you’ve got the right set of questions, you’re just dropping stuff in. Is that fair?

LW: Yeah, for sure.

MM: Okay, what about participant groups? Have you also defined preselected audiences for your tests?

LW: We tend to rely on saved audiences quite a bit. We need to define our target audience for whatever we’re creating anyway, so if it’s not a saved audience, then we make sure we’re getting the right people to ask.

Test emails and customer flows: Make sure customers will do what you want

MM: What other elements of marketing can be strongly helped by user testing? 

LW: Email, both on the prospect and the customer side. It was impactful because we could get insight into what was behind the numbers we were seeing in open and click-through rates. If we find an email that gets more opens, we could try to guess why. Or we could actually hear from people what it is that made them open the email. And then we can do more of that.

To take it to a higher level, user tests have a ton of value whenever you’re trying to get someone to do something. So it’s like, open the email, click on the link. We want to make sure that the content is really strong, but also what would you do next when you read this title? Do you want to know more, are you going to register for a webinar, something like that. That’s where I’ve noticed that the testing seems to have impact, and other teams have seen that as well.

User tests can help you optimize when your analytics aren’t robust

MM: To me, the neat thing about email is that it’s an instance of pairing it with analytics. What do you think about that? 

LW: I love the concept, but what’s tricky is that many marketing teams don’t have as clean and straightforward access to analytics as the ideal would be. I don’t think that’s unusual. When we talk about layering in the human insight on top of your analytics, we’re making an assumption that the analytics side of the house is good. Often the analytics team is working as hard as they can, but they’re dealing with a lot of challenges, and sometimes the marketing content team has a really hard time just getting their hands on the data they want. So in some ways the user testing insights are a way to inform what you’re doing in place of analytics. 

Experience tests expand the value of a marketer

MM: Does adding UserTesting into a marketing team change anybody’s job role? Or is it more just a matter of changing their work habits?

LW: I think it can change the job role. As an example, let’s talk about our writers. Testing creates an expectation that they will be experts in our target audience. It’s their job to understand them in a way that I think wouldn’t be the case if we didn’t have testing available to us. Without user tests, they would get the messaging and positioning from product marketing, and then their job would just be to write it. There’s no expectation other than being a really good marketing writer. Whereas I think, layering in user tests, there’s an expectation that they also have an understanding of our audience, beyond what we read in the field playbook. It’s part of their job to do that.

MM: This taps into a goal that a lot of executives have: How do I make all my employees more customer savvy? They don’t want people to just be copywriters, they want them to be empathetic customer-expert copywriters.

The creative lead: Jahvita Rastafari

Jahvita works in UserTesting’s corporate marketing team, where she is senior director of brand marketing—leading all things brand and creative.

Test the messaging before you work on the creative

MM: Talk to me about how you’ve used UT in your creative work. What problems were you trying to solve? And how did it turn out?

Decouple the messaging from the creative. JR:  We’ve used it quite a bit. Given the volume that our team has to go through, it definitely doesn’t get used at every stage or for every project, but for me, especially on the brand side, anytime that we’re going out with a larger scale brand campaign, like “Here’s the messaging for the year,” my process always is that I like to decouple messaging from creative. 

Let’s say I kind of have a hunch on the right message, and these are the two or three directions that we might want to use. I’ll typically use UserTesting to get reactions for those. 

What’s really hard for designers is a lot of times marketing teams just say, “I want this thing and make it happen,” but they don’t have the content. And if the content changes later, the creative could look really different. So that’s why we like to check the messaging first, instead of just going in with the assumption that this is the message we want to use, and then when we get into concept we go a completely different direction. I have learned the hard way that this is the better way to go. 

Testing the messaging separately also equips me in conversations with my management or other people. They will be curious about what resonated well or why you went that way and not this way. So we’re able to share that. 

Bring the test results to the creative team, along with the messaging. JR: Then I bring it to the creative team and say, “Okay here’s what we heard. Here’s the messaging that resonates most. Here’s why it resonates most, here are the highlight reels.” And then we usually go into a creative brainstorm and come up with concepts. We’ll usually narrow it down to two creative concepts. And we’ll use UserTesting to go back to that same pool of participants and say, “Okay, you gave feedback on this. And now we’ve put it in situ. Like if this were out on a billboard, would it resonate with you?”

That’s our content-to-creative process. 

You’re building an internal creative brief. MM: When I talk with people in ad agencies, their process usually starts with doing a creative brief which summarizes everything you’ve learned about customers and messages. And then you take it over to the creative folks and you have them work. The process you described almost sounded like you’re doing the equivalent of a creative brief. Is that fair?

JR: I think that’s fair. We look at ourselves as an internal creative agency. But what’s different is that we drive programs and we also support programs. I would say we test most often when it’s something we’re driving, because sometimes the other marketing teams need something immediately and we have to trust that they did the research.

MM: Let me repeat that back. You’ve got two modes of working. One is service group: “I need you to do this for me.” In this model, it’s the responsibility of the requesting team to make sure they’re doing their research beforehand, so they should be giving you a brief on what they need and also make sure afterwards that it’s been verified. You’re a step in their process. 

Whereas when you own the project then you build the research into it.

JR: Exactly. That’s a really good way of putting it.

The process for testing creative

MM: So when you’re doing the testing for your own projects, are there particular types of research that you do? Is there a standard methodology you follow? 

JR: The reality is that none of us are researchers. I feel like I’ve got enough of a template now; I’ve got a general set of questions, I know how I set my prototypes and decks up. It’s usually in the realm of a preference test. I utilize the metrics tab like nobody’s business. I make sure that I’ve got multiple choice and think out loud questions because our team has to move so quickly. I don’t always have time to go through the highlight reels. So I use the metrics tab at the bare minimum. I can see whether people are leaning towards A or B so that I can quickly tell the design team where it’s going. And then I use the AI insights feature to summarize. I find that just going into that metrics tab first gives us a really good picture of the general results.

MM: So for testing a creative prototype, you would probably set it up as a desktop think out loud test. But you would ask some specific questions that have numerical rating scales, comparing the two options. The rating questions would give you a chart of the results in the metrics tab. And then also you would probably have them do a verbal response to the same question because that’s something the AI can get its teeth into and summarize. And then you can send the results over to the creative team with, “Here’s the chart and here’s a little discussion of what people said. Knock yourselves out.” Am I getting it?

The power of numbers and video together. JR: Yeah that’s exactly right. And here’s a really good example: When we did our first Real Human Insight campaign the message was, “I don’t care what they’re saying. See what they mean.” If we had just asked whether they prefer A or B we probably would have just chosen one. But there was something in the way that we had put the words together such that when they read it out loud they were pausing. When you heard them getting tripped up, that was interesting. I thought, “Okay, they get it, and it makes sense once they read it three times. But if they have to read it three times, then we need to change the message.”

It gives you the nuance you’re looking for right away. And so that’s where I think it’s helpful to have both the chart and the think out loud because you need to know both what customers like and how they get there. You’ve got three seconds to make an impression. So even if they like something else better, you need the one that gets them there quicker.

How to test naming

MM: So you test the messaging first, and then you test it again when it’s built into creative concepts. Are there any other ways you test?

JR: Another use case that I think people may appreciate is naming. When we were going through the merger between UserTesting and UserZoom, we needed to rename the podcast. We had generated something ridiculous like 60 names. We ran the names through a UserZoom survey, so we were able to force-rank which ones were resonating most, and then went in and did user tests on them. So we were able to get a higher volume of feedback.

Test creative separately by channel

JR: If I’m testing as a creative team, I’ll look at whether it resonates, but I’ll often have zero visibility into how it plays on a particular platform like Instagram or Facebook. I’d love for us to be able to get down to, “this is for TikTok,” and not test the exact same thing on every platform. That feedback loop is often missing because everybody’s moving so fast.

MM: I would assume you have different customer behaviors in TikTok versus something like LinkedIn.

JR: Totally.

How to test without a lot of researcher help

MM: You mentioned that you’re not a researcher, and that you don’t necessarily have researcher help available for every decision. That’s very common in both marketing and product teams. The other day I was talking with a marketing director and asked her if she had any researcher help. And she said, “nope, no researchers.” And I said, “Okay so how do you know that you’re doing this right?” And she said, “You know, I’ve been doing it for a while and I can tell the difference between what’s a leading question and what’s not a leading question. And I don’t really need help on that anymore.”

So what’s your advice if somebody’s really new to this? Should you get a researcher to give you some hand-holding the first few times until you get comfortable, and then you can go solo? Is that the right analogy?

Fast feedback vs. deep understanding: Know which types of tests you can safely do on your own. JR: I actually think it might be the opposite. Here’s a really good example: There is no way I should do an information architecture study on my own (tree test, card sort, etc.). That is a project that absolutely should have a researcher. So I think it’s identifying the right modes for the work that you’re doing.

For our brand campaign, I would almost say you could run the risk of spending too much time with research on that. Oftentimes you might have three days to pitch something. Nine times out of ten you’ve got a really good intuition, you just need that gut check of “is this going to resonate or not?” I think you need to use that discernment to apply researchers to the right level of project.

For example, we’re doing a project right now on our purchasing process. We know that we’re in this fast MVP mode and we know we’ll be revising it later. There might be some things that we want to pressure test and really dig into with a researcher, but otherwise we just need an MVP. We should be able to work between those two modes.

MM: It sounds like there are a couple of work modes. There’s a mode where you’ve got an unstructured or really difficult quandary that you need to get into, where you’re not even sure of the right questions to ask necessarily. And the more the situation is like that, the more you need a professional researcher to help you. But there’s a separate situation where you need fast feedback to validate or invalidate an idea that you have, and if you can get enough feedback to protect you from making a blind guess, that’s good enough. You don’t necessarily need a researcher for that. 

I think we confuse ourselves when we call all those things research.

JR: I totally agree.

If help is available, focus on getting the questions right. MM: Do you have advice for someone who does get access to a researcher? What would you want to have them help you with? Are there questions you should ask them or something like that?

JR: What I found really valuable was just going through the first exercise with the researchers, like formulating the questions. Almost backing into what we want to learn. What might we be missing? And just having that objective point of view for things like the website. It’s so emotional for people. It’s nice to have an outside point of view.

To me that was the most helpful part. And I think oftentimes that also feels like the most tedious part, like am I asking the right things? Am I missing something? Is there something I might not be thinking about as I formulate these questions? That was really helpful. But my personal experience was that the process with a researcher is just so much longer. And so there’s not a ton of researcher projects given the way that we operate. We need to refine and refine again and again.

But I do think when there are bigger things like a website, rebrand, or information architecture that could have really large downstream impacts on the way that people navigate, I think it’s important to bring the research team in.

Lessons learned: Start your test with a few participants

MM: What about problems? Are there things you’ve run into where you wish you’d done it differently, or you got stuck? Any lessons learned you’d like to share?

JR: Sometimes in the test creation part I’ll get surprises. For example, I’ll think that I’ve written a question out really clearly. But if I’m testing with a product leader, I have found that they get super literal. So if I ask them to compare two concepts and they don’t see both concepts right at the start, they get worried that there’s not another one and that they may be doing something wrong. It totally changes their feedback. So you need to run the test first with one or two people before you go to the larger audience.

Don’t assume

MM: Are there any other thoughts you’d like to share?

JR: I feel like folks make a lot of assumptions about what will work in marketing. And once you do that, you’re just building off of what someone wants instead of what’s right for the customer.

MM: There’s another word that’s come up a lot when I talk to experience creators: assumptions. Test your assumptions, recognize that assumptions are dangerous. That seems to be a recurring theme with several people. 

The optimizer: Meg Emory

Meg is the optimization lead in UT’s Digital Experience team, which manages the company’s customer-facing website. 

How to test the competition’s experience

MM: Talk to me about the ways you use UserTesting in your work. What problems have you tried to solve? And how’s it all gone?

ME: I use it for several different things. First, competitor analysis: getting feedback on competitor ads and landing pages, and determining whether impressions change in the journey from ad click to reviewing the landing page. We did that with a handful of competitors to inform a recent campaign.

MM: How did you set that up? What specific tests did you run? 

Use prototypes to test competitive messages in a realistic setting. ME: I built out Figma prototypes they could click through, to make it as realistic as possible. It looked as if they were seeing the ad in a LinkedIn feed. We mostly wanted to get reactions on the overall design, the messaging, what was resonating best, and how inclined they were to take action. Overall we wanted to gauge what our competitors were saying, how they were presenting it to people, and that level of motivation to continue clicking as a benchmark to optimize our ads against.

MM: What exactly did you build in Figma? What was that like?

ME: It was a mobile phone prototype, with the LinkedIn background to set the scenario. From there I placed the different ads in there and linked them to the appropriate landing page. Some ads were more customer story focused, some were event focused, and some were just general value proposition focused. This setup allowed us to group the sentiment in our responses and organize the feedback for our separate campaigns.

MM: So the prototype was what the user would see in the test. Was this a desktop test with a simulated mobile device on the screen, saying, “suppose you saw this when you’re in LinkedIn on your mobile device”?

ME: Yes.

MM: And then, were you asking would you click, or did you just tell them to just click through and react to what they saw?

ME: It was more like click through each ad experience and then they were asked to answer a couple of questions after each ad.

MM: Got it. Were you quizzing them on both what they thought about the ad and what they thought about the landing page separately, or did you just ask them about the whole thing at the end?

ME: Separately, so that their feedback for that specific ad to landing page experience was top of mind.

MM: So did anything stand out from the things you learned? Were there any surprises or big a-has?

Create simple visual metaphors. ME: There was an ad for mixed methods research that had an icon of a salad bowl. Everybody loved that. Those everyday cues resonate a lot better with people as far as grasping concepts, especially for customers that have a lower level of experience maturity. It’s kind of interesting how we can use everyday cues to sell the product in ways that are a little more accessible to people.

MM: Interesting. So visual metaphors were communicating the idea. What was it about the image that resonated with them? Was it the idea of a problem that they had, or just an idea of the functionality of the product?

ME: I think it was more of the functionality of mixed methods. The fact that they could actually see the mix in something as common as a salad, but the copy related the concept to research methods, sold them a lot more.

The power of statistical proof in your messages. MM: What were the other takeaways that you had, other than using real-world visual metaphors? 

ME: Definitely heavily relying on stats. People don’t really want the fluff, and the high level messaging is a little bit harder to relate to versus a hard stat of how something can improve.

MM: You said you used a prototype to show mobile ads. Did you test desktop ads as well?

ME: Just mobile.

MM: What was the reason for that? That’s interesting to me.

ME: We were trying to replicate mindless scrolling, where it’s not highly intentional.

MM: So on desktop I assume it’s going to be a lot more intentional usage. Is there anything else you want to mention about that test?

ME: The brevity of the landing page design is very important. I remember specifically that competitors shared customer stories, and people complained about too much text. We’ve been trying to push to have our landing page content more consolidated and direct to not overwhelm prospects and keep them focused on conversion.

Impression testing: Take 10 seconds, not five

ME: For our brand itself, we like to do some five-second tests to get first impressions of pages, especially the homepage, to make sure we engage people right off the bat. The learning from that was that five seconds is way too quick. Even though people form a first impression really quickly, it’s not quite that quick when you’re trying to get accurate feedback in a test setting, so we used the UserZoom platform to run a 10-second test to give them more time to process.

MM: Wow, five-second tests have been established in experience research for a long time, so I’m a little surprised that it turns out five seconds was too short. Talk to me about that.

ME: In five seconds, the first thing they notice is the UserTesting logo, then the navigation itself, and then the fact that there’s a video on the page. Then it’s done. The feedback every time was that it went way too fast, like “I’m not really sure what this page is about or who it’s for.”

MM: So the motion of the video means they take longer to process. 

ME: Yeah. It’s like a shiny object, it’s moving and engaging. So it’s kind of a double-edged sword: the motion captures attention, but it slows processing. The team decided we need to give them a bit more time, or we’re not going to get good feedback.

MM: And was this test mobile and desktop?

ME: This one was on desktop, but with even less real estate on mobile I’d imagine the feedback to be similar.

How to combine analytics and user tests

ME: We look at the general layouts of pages, what customers feel is important, and triangulate that with different data sources like Hotjar and Google Analytics to get the full picture. We use QXscore for that as well. It’s great to get attitudinal and behavioral metrics all in one place for a full picture of the experience.

MM: Could you take me through exactly how that pairing with Hotjar and analytics works? 

ME: We pair them primarily through the filters available in the integration Hotjar offers, as well as using those two to triangulate data for our A/B testing. Mutiny is the program we use to do A/B testing, and in both Hotjar and Google Analytics we have filters for different Mutiny experiment names, segments, and variables to compare click maps in Hotjar with the views and behavior we see in Google Analytics. We can actually comb through the recordings and the heat maps based on specific experiences we’re running, and then if there’s something where we don’t quite understand it, we can follow up with a UserTesting test to get some additional feedback in a day to clarify what’s happening and why. It’s really helpful to keep optimizing quickly.

MM: Okay, good. Let me be sure I’m understanding this. So you’ve got Hotjar, which you’re using for session replay, primarily.

ME: And also for heat maps and click maps. 

MM: And then you’ve also got Google Analytics tied into it to identify places where you’re not getting the flow you expected. Once you find the places where it’s not working, what’s your next step? How do you use UT, or whatever else, in order to get at the insights on what’s happening?

When you build the journey, understand what people want to know, not just what you want to tell them. ME: Here’s an example. We implemented some personalizations for people working in financial services. Pages in the experience had elements that were specific to that vertical, and we wanted to get some feedback as far as if those personalizations were even what those users were looking for, as well as where they were going in their journey, what would make sense if they were researching the company to make a purchase, and the types of content they were looking for. We wanted to know what would help persuade them and if we were hitting that mark or missing some areas to personalize. We thought they would only look at our direct conversion path, but instead many people navigated to our role-focused pages to look for solutions and more information about how the product could help them in their job role.

After running a UserTesting test to get this feedback, we ended up revising the personalizations and really homing in on those role pages. This meant updating headlines, logos, and content to be more relevant  on an industry level, so we could better reach them at that stage of their journey.

MM: So we have people coming in to a particular industry landing page, and you were finding that rather than proceeding down whatever path you’d sketched out for them, they were jumping, using website navigation, to information related to their job role.

ME: Yes. If somebody was in the financial services industry, they’re not coming to our financial services page from the homepage most of the time, they’re actually going to the page for their job role and trying to figure out how we can help someone who has that job role in financial services. We would not have learned why they were doing that without a user test.

MM: So when you’re looking at the analytics and the Hotjar recordings and all that sort of stuff, how did you decide exactly when to do a user test?

ME: It had been about a month that we weren’t really seeing the results we expected. The whole story just wasn’t there, and we weren’t able to get the why from the data. It’s a green flag to go get some qualitative data.

Run a test when you don’t have enough data to make an informed hypothesis. MM: So the trigger is that we don’t have enough data to make an informed hypothesis about what to fix. Rather than experimenting at random, you run some user tests.

Tell me about the structure of that user test in this case. Do you ask them to go to the homepage and say how they would scroll around, or to click in order to find what they need to find?

ME: The questions were a little bit more targeted to a journey stage. So “here’s the homepage. Where would you go to research this product? Where would you go to purchase the product?” More high level questions to just see what that journey looked like for them.

MM: Any other thoughts on that usage, or suggestions to anybody who’s trying to do it?

ME: Do the user test first. We let our A/B test run for a little bit too long, and I wish we had just done the prep tests ahead of time.

MM: Interesting. When I’ve talked with people about A/B testing, they’ve talked to me about pre-testing alternates, vetting them ahead of time to pick the best ones. I thought that was more of a design thing, to make sure the look of the alternates was OK. But what you’re saying is to do more exploration of potential journeys first. Just ask them to come to the site and look around, before you even try to start doing any changes. Am I getting that right?

ME: It depends on the project, but for this one, yes, that was the case.

Quick design comparison tests: When there’s no time for A/B testing

ME: We do a lot of testing of design updates. When we merged UserTesting and UserZoom, we did a customer story and blog redesign and ran tests on these  to make sure the changes would resonate. We did a side by side comparison with the old site and the proposed new designs. And then we added a survey to do a little bit more of a buyer analysis, as far as who’s involved in the decision, and what content on our site resonates with them, and what that process looks like. 

MM: What exactly did you test? And what were you trying to learn?

ME: For any site design change in the last few months, we’ve been doing a side-by-side test. We have them toggle between the two versions. The design team usually comes to us with a series of topics they would like feedback on, kind of like areas of concern in the design process. We address those and also formulate questions to get feedback on the general look and feel: how does it relate to the larger brand, and what do they like or dislike about it? You’d assume a new, modern design, usually with a lot better accessibility, will score well. But sometimes we get feedback saying to go back to the old design or Frankenstein the two designs together.

MM: That’s an interesting approach. Would you be willing to share the instructions that you use to set up that test, so people could see exactly how to set it up?

ME: Yeah, for sure.

Meg’s Test Plan for Side-by-Side Design Comparison

You’ll be reviewing two different web pages and then asked to rate your experience. Please remember to clearly explain your thoughts and actions out loud. Don’t pretend to be someone else; we care about your needs and thoughts.

Tasks

  1. Launch URL: https://web.usertesting.com/resources/customers/banco-sabadell
    You have been taken to a new page. When you see Page 1, move on to the next step to open the second page.
  2. Launch URL: https://www.usertesting.com/resources/customers/banco-sabadell
    Please open Page 2. Be prepared to toggle between the two windows, so you can compare the web pages side by side.
  3. Based on the content and design at the top of each page, what is your initial impression of each page? [Verbal response]
  4. Which layout at the top of the page do you prefer? Explain your answer. [Multiple choice: Page 1, Page 2]
  5. In your own words, describe the difference between the two options you just saw. [Verbal response]
  6. Please provide any feedback on the accordion design placed below the video on Page 1. [Verbal response]
  7. Which design do you prefer for the testimonials? Explain your answer. [Multiple choice: Page 1, Page 2]
  8. Which option did you prefer overall? Explain your answer. [Multiple choice: Page 1, Page 2]
  9. What, if anything, do you think Page 1 does better than Page 2 to help you learn about customer successes with UserTesting? [Verbal response]
  10. What additional feedback, if any, do you have about the two options you saw? [Verbal response]

MM: It sounds like these design tests are spiritually related to an A/B test: “Here’s the old version, here’s the new version, side by side. Give me your reactions.” Is that a fair summary?

ME: Yeah, fair summary. We also monitor the analytics after the fact. 

MM: Got it. So how do you decide when you’re going to do that sort of qualitative test to be comfortable with the new design, versus doing a formal A/B test?

ME: Timeline. For these page designs, I’ve been kind of at the mercy of the very fast migration timeline with the dev team. 

MM: So when you have the time you’ll go ahead and do the formal A/B, but when you don’t have the time, the side by side comparison test is the next best thing?

ME: Yeah, it’s almost like getting the pre-launch feedback for a prototype or a product and  iterating as we go. The time constraints make you more creative with your testing.

Testing an offer: How much hand-holding is needed?

MM: Are there any other test plans that could be useful for a marketer in your sort of role?  

ME: Yeah, I think the study that we just ran on a new offering purchase journey is a good example. That one was really interesting because we went back and forth regarding just how much hand-holding to do within the test to make sure we got relevant feedback.

There’s always an interesting balance where we want to give participants enough information so they understand the test, but we don’t want to tell them exactly how to do it. We prepared a whole separate list of step-by-step task instructions we could have given the participants to make sure they even got into the page flow we wanted feedback on, but thankfully the more unguided version ended up working out really well. Since this test focused on getting feedback on the flow itself, we ended up running a follow-up test as well that gave even vaguer instructions to see if users would choose the flow without any sort of scenario setup.

I think it’s a good example of the balance between hand-holding to get the results you need, but then being vague enough to let them explore and give useful feedback.

Here’s the test plan we landed on:

Meg’s Test Plan: Balancing Hand-Holding with Tasks

UserTesting is launching two new project-based services available for purchase to any buyer. These options are off-the-shelf studies that provide insights to teams that want to outsource end-to-end experience research and get the insights to guide their decision making in less than two weeks. For this test, imagine that you are a small-to-medium business interested in these services. However, you are not yet comfortable doing your own testing, but you have an immediate need for insights.

Tasks

  1. [Meg comment: The first three steps are hand-holding to get the participant into the flow.] Launch URL: https://web.usertesting.com/homepage-cloned-testing
    Once the new page fully loads, move on to the next step.
  2. Click on the start here button in the hero section that overlays the video.
  3. Click the card on the left as if you were going to evaluate UserTesting for your company.
  4. Review the page and explain what you believe the difference is between the two options. Don’t click anywhere yet. [Verbal response] [Meg comment: The key was including specifics on when to stop and be more mindful, without being too detailed about where they should be in the process. Same with steps 9-10. Because of how the tasks were written, even when a contributor took the “wrong” path, they eventually figured out they were in the wrong spot and course-corrected on their own.]
  5. What, if any, questions or concerns do you have? [Written response]
  6. Based on the scenario you read at the beginning of the test, which offering do you think would be the best fit for your company? Please click on that option.
  7. How well does this next step meet your expectations? Please share your thoughts out loud. [5-point Rating scale: Poor to Excellent]
  8. Please rate how well you understand the offering(s). Please share your thoughts out loud. [5-point Rating scale: Not at all clear to Very clear]
  9. Before clicking anywhere else on the website, please read the next step very carefully.
  10. Continue clicking to the next step in this process until you have reached what you believe to be the checkout page for this offering (if you have not gotten there already). For EACH new page you land on, review the page and answer the following questions out loud. 1. How well do you understand the offering presented, on a scale of 1-5 (1 being a poor understanding, 5 being an excellent understanding)? 2. What, if any, questions or concerns do you have about the offering or the experience? [Verbal response]

Guided Tasks (for reference; these are the more detailed tasks that were prepared but ultimately not needed)

  • Click on the start here button in the hero section that overlays the video.
  • Click the card on the left as if you were going to evaluate UserTesting for your company.
  • Review the page and explain what you believe the difference is between the two options. Don’t click anywhere yet.
  • What, if any, questions or concerns do you have?
  • Based on the scenario you read at the beginning of the test, which offering do you think would be the best fit for your company?
  • Please click on the card on the right for platform solutions.
  • How well does this next step meet your expectations? Please share your thoughts out loud.
  • Please rate how well you understand the offering. Explain your answer. [Rating scale]
  • How do you think this offering would benefit your company?
  • Click the back button in your browser.
  • Now click the card on the left side of the screen for Insights Services.
  • Please review the page, but don’t click anywhere yet.
  • Please rate how well you understand the offering. Explain your answer. [Rating scale]
  • What, if any, questions or concerns do you have?
  • Select which of the two offerings you think would be the most beneficial for your startup to quickly test its prototype.
  • Review the page and explain your thoughts out loud.
  • Please rate how well you understand the offering. Explain your answer. [Rating scale]
  • What, if any, questions or concerns do you have?
  • To move forward with this offering for your company, where would you click?
  • Does this step meet your expectations? Explain your answer. [Rating Scale]

Replace assumptions with understanding

MM: What would you say if you were talking to somebody who’s in a marketing team and is new to user tests, and they asked you, “What should I be using this for? What problems is it going to solve for me?”

ME: It gives understanding. It’s so easy to make our own assumptions, but without this piece it’s impossible to get the whole picture.

I think of a company I previously worked for where I kept pushing for a product like this so we could make more informed decisions, and couldn’t get it approved due to budget constraints. We based so many decisions off of our assumptions, and it left the human aspect out and resulted in small, slow, incremental changes. I think you have to listen to the people on the other end. It’s like, why are you wasting the team’s time when you could go in with a very informed decision?

The project leader: LeTísha Shaw

LeTísha is a senior director of product marketing at UserTesting and is leading a project to expand the company’s online distribution channels. She discussed how she is using user tests in that process.

Test iteratively throughout the project

Start with needs. LS: We started with needs discovery: What kind of capabilities would interest you, depending on your job role? From testing, we learned that many of the respondents in our target audience only cared about the basics — finding the right audience, having a dashboard to view results, auto-generated reporting on the metrics — and had a do-it-yourself orientation. There was also a degree of variation depending on whether they have people inside or outside the company who can help them. We also decided to reach out to disqualified leads for additional discovery; could we have convinced them to buy in this new channel?

During development, test every element. LS: Our testing did not stop there. During the development process, we ran a total of 14 studies on everything ranging from naming, packaging, and positioning to messaging comprehension on web pages and email journeys. This included separate tests on five of the web pages in our online journey, focused on whether visitors 1) understand the information on the page and 2) find the call to action compelling. For each of the email sequences, we tested messaging comprehension, including whether the subject line resonates and whether the call to action is compelling.

Test e-commerce flows. LS: We also conducted an ecommerce flow test, which is a guided study to see if the user can get through the purchase flow as expected. This involved testing a live prototype of the website and checking whether they could succeed with the task as well as how they felt about it. Based on that research, we made several modifications, including removing a step from our purchasing flow, increasing readability, and adding elements to guide navigation.

After you make a change, validate that it worked. LS: We used to think that all we needed to run was one test, but we quickly learned ways to optimize our approach. Before launching the full test, we run a pre-check of each test with a single participant to be sure there aren’t any bugs in the test. Initially, we thought we got it right the first time and ran tests to validate those experiences. Later, we learned that we often needed to run a second test to confirm that the changes we made to the experience, based on the insights from the initial test, actually had the desired impact.

So as a marketer, it’s beneficial to test repeatedly as you develop the website experience and create accompanying campaigns. That’s a new motion for many marketers. 

You should run multiple iterative tests for every project. For example, check your messaging once with just the words, and then again with the buttons and calls to action added. Out of that testing we learned we had too many steps, too many clicks, and so on.

What we learned from the testing

Although we were not able to implement all of the learnings, we emerged from the experience with a much deeper understanding of our target audience. Across all of our tests, we learned that for this audience we have to keep messaging clear, succinct, and straightforward:

  • Our packaging was initially too complicated and difficult to understand without additional education. We chose to simplify the packaging rather than educate prospects on the details. We also found we needed to modify the call to action, because it set very different expectations about what a prospect expects to happen next in their buying journey.
  • We simplified the email campaigns and removed the emails that did not resonate. We altered subject lines and email templates to make the emails more visually compelling.

Test instead of debating. We could have spent a lot of time in meetings debating which headline, image, copy, or click path is better, and none of those debates would have revealed our blind spots. Instead of debating, we used the insights from testing to improve buyer and customer experiences before we rolled them out.

If you can test it first, do that and you’ll thank yourself later. 

The product marketer: Tom Valentin

Tom is a senior director of product marketing at UserTesting. His team supervises the marketing for most of UT’s product offerings.

MM: What problems do user tests solve for marketers?

TV: Marketing can mean so many things, and the functions in it are so varied. A brand marketer at a consumer packaged goods company and a tech product marketer like me do very different work. Or think about a copywriter at an agency versus a marketing ops person in a big company.

Two ways that experience tests help CMOs: brand perception and revenue. TV: If you are talking to a CMO, they care about two things: brand perception and the ability to drive revenue. User tests can help with both. The brand side is about risk mitigation: making sure that changes to your marketing resonate with customers before they go to market.

There’s also a competitive angle to brand: how does your brand experience stack up against a competitor’s? For example, if I’m the CMO of Delta, how do I stack up against Alaska Airlines? An experience test can answer that quickly.

As for driving revenue, the usage is much like the way product teams use us: optimizing your web, ecommerce, retail, point-of-sale experiences, and so on. The challenge is that marketing teams are so used to relying on numbers to make decisions that they don’t always know how to include the human angle.

The role of experience tests in product marketing. MM: What about use in product marketing?

TV: Product marketing is the most nebulous role in marketing. My own experience in product marketing at AT&T looked nothing like what I do at UserTesting. Sometimes product marketing is like being a product manager, sometimes it’s being a marketing thought leader. In UT product marketing, we use user tests for message testing, persona understanding, competitive analysis, and naming. We don’t test visual creatives because we don’t create those.

I will push someone who owns a product release to test how the messages resonate and whether customers understand the words we are using. Showing the company that our message resonates is helpful. The message tests help with internal alignment and external validation.

MM: Do you use tests to understand customer personas?

TV: That comes through the other tests. I test the messages on different audiences: designers, researchers, and so on. The results tell us how they think. I have also done some tests to understand their tools and processes, to understand the context in which they operate. That’s a little different from a UX test; it’s about how you sell to them and how you make the message relevant to them. Hopefully our product research checked some of that, but it’s one thing to build a product and another to message it.

MM: Do you do any competitive testing?

TV: It’s incredibly easy in our contributor network to find people who have used our competitors. We’ll do card sorting and tree testing to understand the space and what they liked about competitive offerings. We do a lot to check positioning: what you say about the product to a persona. We try to understand what they are already doing and how they feel about it.

Naming does not come up all that often, but like the brand stuff it’s high risk if you screw it up. It requires more alignment internally. Having feedback from a user test is almost imperative, because there are a lot of conflicting opinions. 

How to ensure quality in testing. MM: How do you ensure quality in the tests your team does? 

TV: I jump right into it to see the results. That points to what I screwed up. “Oh, I led them down this path, let me reframe it.” It takes trial and error. 

The screeners are incredibly important; getting those right is as important as anything else. I’d rather have mediocre questions with the right person than the other way around. We have predefined screeners for our audiences. I start with them and then tweak them. It’s an art.

MM: Do you have any researchers in the team to help you?

TV: No, there are no researchers on the team. That’s where the trial and error comes in. Most marketers have taken some level of research classes. I can put together a pretty good survey and a think-out-loud test, but then, I am pretty capable. Surveys are easier; they are inherent in a lot of marketing jobs, and being able to review them is part of the job.

Start with competitive testing. MM: Do you have any advice for a product marketer getting started in user tests?

TV: Do a little competitive research. It’s easy and eye-opening, and regardless of your role you should have an interest in it. You are not yet making a decision or building out your persona or battle card, so the stakes are low.

Don’t do naming to start with; that is more intense and frustrating.

Additional reading

Here are some additional resources on the use of human insight in marketing:

How to Drive More Effective Marketing with Fast Human Insight. Detailed instructions on how to use fast feedback to create content that resonates with customers.

How NRG Energy fine-tuned its mascot

How Athletic Greens optimized its messaging and imagery

Photo by hanahiraku on Pexels.

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.
