Summary

The UX world has made a lot of progress in the last six years, but it still has a long way to go. Competition from AB testing and design agencies, which was a big issue six years ago, has now transformed into synergy. But the fail-fast philosophy of agile, and companies’ delusion that usability can be deferred to version two of a project, are at least as big a problem as they were six years ago, maybe more. AI is emerging as either a new assistant or a new challenge (possibly both), but perhaps the biggest barrier to success for UX professionals remains our own struggle to communicate our value and insights to the rest of the company in compelling business terms that they’ll understand.

Lee Duddell is an OG in the UX world. In 2009 he founded What Users Do, the leading user research company in the UK. His company later merged with UserZoom, which has now combined with UserTesting. In 2017, Lee gave a popular talk describing the leading enemies of UX, which was widely discussed online.

We talked with Lee to update his list. We wanted to see the progress we’ve made in UX, what remains to be fixed, and what new challenges have emerged. 

Our conversation has been edited for length and clarity.


Mike: Give me the backstory on how you came up with that presentation. And where did you present it?

Lee: When I created that presentation, UX research, particularly remote UX research, was really just getting started. People didn’t believe that you could run UX research remotely or that it would be any good. I remember being invited to the Northern UX Conference in England, which is now quite a big conference, because another speaker had pulled out. I had like 24 hours to prepare. As I was on the train thinking about what I wanted to talk about, I wrote down seven or eight things that were in my mind from engaging with people in the market. I thought, “Why don’t I just turn this into a list of things to talk about, call it enemies of UX, do a countdown, and let people contribute to it as well?” I wanted to kind of resonate with them, but also challenge a little bit of the way that a lot of people were working seven or eight years ago.


Mike: Let’s go through the list and update it. For each one, do you still believe that’s a barrier? Do you view it differently? Let’s start at the bottom of the list at number 10.

10. AB testing

Lee: It was kind of an enemy then because a lot of people thought they were doing right by the end user because they’d got a successful AB test and therefore the experience had been improved. What had probably been improved was some measurement of conversion rate, average order value, or more nebulous stuff like time on page. Everybody was hiring conversion rate optimizers. Or there were independent agents advising companies that the metric indicating their programs were successful was how many AB tests they had live.

“How many tests have we run this month?” “Oh, we’ve run a thousand AB tests.” And that was kind of the headline number. A number that showed effort, rather than impact.

I noticed a few things. One was that the headline number soon became meaningless when a lot of those tests would plateau. When you first adopt AB testing, I can all but guarantee that your first 10-20 tests will be really good. You get a winner because you’re able to touch something that you’ve never been able to touch. There’s always some low-hanging fruit. Even without lots of user research, you’re gonna get some winners and that’s great, but it soon plateaus.

Now I’ve read somewhere that less than 20% of AB tests are actually conclusive in any way.

Less than 20% of AB tests are actually conclusive in any way.
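To make “conclusive” concrete: here is a minimal sketch, not from the interview, of the standard two-proportion z-test that significance checks for AB tests are typically based on. The function name and the traffic numbers are invented for illustration.

```python
# Minimal sketch: is an AB test "conclusive"? A two-proportion z-test
# on conversion counts, the usual statistical-significance check.
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A 5% relative lift on a 4% baseline with 10,000 visitors per arm
# (made-up numbers) gives a p-value of roughly 0.48.
print(ab_test_p_value(400, 10_000, 420, 10_000))
```

With realistic traffic and a small lift, the result stays far above the conventional 0.05 significance cutoff, which is one mechanical reason so many AB tests end up inconclusive.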

The reason I thought AB testing then was an enemy is that so many of the experiments were just based on guesswork. Pure guesswork. Like, “oh, I think the customer would prefer this button here.” Or “I think this copy will be more compelling for people to buy than another piece of copy.” It’s problematic if you’re not solving real problems that customers have.

I think that might be true today as well. But I see more AB testing teams using insight, particularly using qualitative insight, to kind of inform and drive their AB testing program. And I think there’s more emphasis now on the performance of AB testing rather than just the pure volume.

It’s got to be better to run three high-performing AB tests than it is to do a thousand with five of them making a difference.


Mike: I think seven or eight years ago, AB testing was an enemy in the sense that it gave the illusion of providing a comprehensive understanding of customers. I think today a lot of companies are more nuanced in their understanding of what AB testing can do for them. Is it fair to say that AB testing has turned into kind of an opportunity or an ally, or at least a frenemy?

Lee: Yeah. I think it has turned into an opportunity as people seek to have more impactful AB tests.

One of the best AB tests that could happen is for a conversion rate department to compare their user-research-driven AB tests with their hunch-based AB tests. I know anecdotally that a few companies have done that, and they’ve stuck with the user-research-driven AB tests because they’re just so much more impactful.

9. Design agencies

Lee: At the time, agencies would always describe themselves as creative. They came from a background where maybe they were doing marketing and they started moving into the digital space. They were full of creative designers who did a great job at wowing their client stakeholders. But they never felt that they really needed to understand the ultimate end user of the design or the consumer of the design.

And very few of them back then actually had any user research capability in their offering.

Boy, that’s changed. This has really transformed. I think some of it has been driven by customers. Now we see a bunch of agencies looking to partner. So today I wouldn’t list design agencies as an enemy of UX. I would take them out of the list.

8. Doing it all

Lee: At the time a lot of people were starting to hire their first UX researcher. And UX researchers tended to go in with lots of ambition, wanting to change the organization and immediately flip it into being customer-centered.

But you know what, change takes time. And people need to be convinced of the change. And doing all of those things like managing stakeholders, proving the value, et cetera, back then was even more important. 

So the researchers went in and kind of burned themselves out trying to do loads and loads of studies here, there, and everywhere, but never stood back and asked, “What should I do? What should I run that’s actually gonna move the needle in terms of the way that this organization values UX research?” They just got in and did loads of research.

I think that’s changed. There are more UX researchers and people have become more experienced and we’re seeing UX research leaders emerge. I think we are more thoughtful as a community about how we engage with the wider organization that we’re part of.

So, I think I’d probably take it out of the list. I’m not saying it’s solved, and I’m not saying that companies everywhere value UX research as much as they should. But even if we don’t necessarily know the solution, we know that it is a problem we need to solve. Researchers know it’s a danger: don’t just jump in and promise everything and try to do it all at once.

7. Asking users

Lee: I went from something quite strategic about how we prove the value of UX research, into something that’s highly tactical, which is, “how do you actually get meaningful insight?”

At the time, so many people were relying on on-site surveys to give them insight into UX. And I’m not saying an online survey can’t do that, but a poorly designed online survey definitely can’t.

And people went crazy. A bunch of stakeholders came in and said, “Here are all the things I need to know from visitors to our site.” And you’d end up with 10 or 12 questions, and lots of open text boxes and lots of scales, and it goes on and on and on.

Whereas I suggested back then that the best use of an on-site survey could be to drive further usability tests. For example, can you just ask [users] what they’re trying to do? Once you know something about intent, you could then flip that into tasks for your usability test.

Here’s a screenshot of what those on-site surveys were like:

[Image: a website survey, circa 2015]

The other problem I had with it is that you need to know what users do, not what they say they do. Or what they can remember. It’s better to observe than to ask people to recall what was good or bad.


Mike: Let me ask a related question: Social listening—extracting from social media commentary the things that people complain about—is that a version of this problem in the sense that you’re looking at things that people say rather than observing what they do?

Lee: Well, I guess you can get some insight from social listening into brand and into maybe some CX customer service problems you need to think about.

But it’s rare that anybody posts, “Hey, that was a great smooth experience on this site.” So it has inherent bias in there. Again, you could use some of that insight to determine what tests and what research to run on your site, so that you can observe people.

6. Labs

Lee: This was really controversial at the time, because most people were doing user research in labs. I didn’t stand there and say you should close your lab down and use remote research, even though I wanted to. But what I did think was that labs are just potentially so heavyweight and laborious.

I’m not saying the researchers and others shouldn’t sit with an end user or spend time talking to them in person. It’s not that, but it’s the way that you end up structuring the research around it. That was my concern.

One of the big issues is that people tended to crowd loads of research questions into a session. Like, “oh my God, we’ve got someone coming in for an hour, let’s use that.” It meant that you’d end up asking loads and loads of things of a participant, and it ended up being almost overwhelming for them because everybody wanted a piece of it.

And also, they were expensive. When you add up running labs versus running remote research, it didn’t necessarily make financial sense. And often people were using them for the wrong kinds of things: they were doing very simple tactical design testing in labs rather than what I think they’re more suited for, which is potentially in-depth interviews or testing something that’s physical.

But Covid <laugh> is not an enemy of remote UX. In fact, it’s been the absolute opposite. It has forced loads of people to think differently about in-person research. And as sad as it is to say, that’s driven lots of efficiency and a change in the way that people approach the research that they do.

5. “The re’s”

Mike: The fifth one is “the re’s”: re-platform, re-engineer, repurpose, refresh, reimagine. Talk about that.

Lee: I was trying to say that larger transformational initiatives, when someone’s re-platforming or reimagining, big redesigns, those kinds of significant changes, are often driven by technology in a way that completely misses the user.

Digital transformation is the most extreme version of the re’s, but unfortunately it doesn’t begin with a “re.” But I’m sure we’re still allowed to include it.

When Marks & Spencer, a massive UK retailer, re-platformed, their e-commerce team spent millions and millions on it, and it absolutely bombed from a usability perspective (link). And their sales dropped as a result of it.

And the reason that everything bombed is that the focus had been on technology and this big project that just had to happen, had to be delivered. It was in their annual report. And they just hadn’t included any level of user research or usability testing at all. And it just got panned, and it bombed.

It’s old, but it’s a good one.


Mike: Is this still happening?

Lee: Yeah, I think the pace of digital transformation has actually got faster, and more people are doing it. This report shows that about 70% of Chief Technology Officers in parts of Europe and the Middle East see digital transformation as a high priority in 2023. And depending on where you are in the world, some places are just really getting started with it.

We’re seeing more people run user research, but I still think a lot of these bigger programs don’t start with the user. They don’t start with the customer. So the ethos of them, the way they operate is risky, really risky.

A lot of these programs don’t start with the customer. So the way they operate is really risky.

And what you’re really risking here is spending a lot of time and effort on rework: doing all of this big stuff and then having to rework it once it’s in the wild, when you figure out that it hasn’t worked as well as it should.

And for the sake of integrating user testing into the process, which isn’t necessarily expensive in terms of time and money, you can save yourself a fortune, and possibly your job if you’re the one leading it.


Mike: The pattern that we see sometimes with companies is that they will do the user research very late in the design process to double check that it’s okay with users, at which point it’s too late to change most of the decisions. Not just the small interface decisions, but the very broad decisions about what customer problem we’re solving and what are the key things we need to push on. Is that related to this, that you really need to get the user insights very early in the process?

Lee: Yeah, I think it is, in the way a lot of companies work just on the day-to-day, let alone on bigger projects. But in bigger projects there’s just more pressure on people. And they’re more complex, aren’t they? And people feel that there’s enough complexity without bringing in the user angle as well.

I think there are two things there. One is: is this big thing actually solving real problems that customers have? That’s kind of what we call discovery research. And the second is validating design very late, rather than actually having a go/no-go based upon testing during earlier design phases.

I’d probably keep it in the list.

4. Focus groups

Mike: This is one of my favorite subjects. Talk to me about focus groups.

Lee: I think they might be quite good for marketers who are talking about marketing and branding things, even though I think one of the early inventors of focus groups reckons they might not be that good for that now.

But for UX, for understanding digital journeys in particular, I think focus groups are used a lot less than they were, which is good, but I think they are still used for that purpose. And they’re just useless for that.

Because we’re typically on our own when we’re using our devices. And everything that’s wrong with focus groups like groupthink and the most vocal person leading the room just means there’s not a lot you can learn about UX from a focus group.

It’s one of those things that sounds like science. And people understand: oh, you’re getting lots of people in a room, so surely that’s got to be good. People might think, “We did 10, we got 20; surely that’s better than one or two.”

I’ve not heard of any customer in the last four or five years using a focus group for digital experience. So probably take it out as an enemy of UX at the moment.


Mike: I agree, although I would add that I’ve occasionally had customers ask for the ability to do focus groups through the system. Like, “you can do remote interviews, can you do remote focus groups as well?” You want to say yes, of course, but at the same time you want to say, “that’s crazy, don’t do that. You’re gonna damage yourselves.”

Lee: A trusted advisor would say, no, you don’t wanna do that. I think that’s probably what I would reach for if a customer asks.

3. The UXers themselves

Lee: I remember standing up in front of everyone. I asked people to shout out the best UX research book they’ve read recently, or the best UX book. And people came up with the usual things.

And you know what? I held up a sales training book and said, “This is probably the most important thing that you could read if you want to push UX research forward, because you need to become a salesperson. Not only do you need to become great at managing stakeholders, managing the organization, doing all of those things, but you need to become really, really good at sales because that’s what you are doing internally day in and day out. And you need to start linking UX research to business metrics, to the metrics that the business understands.”

If you want to push UX research forward, you need to become really, really good at sales because that’s what you are doing internally day in and day out. And you need to start linking UX research to the business metrics that the business understands.

That’s still the case. And it’s increasingly the case when the economy is in an unusual situation, which it is now. It’s increasingly the case when there’s finite budget.

When we were discussing this with people at the conference, one guy gave us a tip. He said it’s a bit like being a reverse salesperson: “I went into my company and I could see what they were doing. They were launching loads of stuff without testing it. And they brought me in as a solo UX researcher. So I just waited for something big to fail. And then I explained and presented to the senior team how they could have de-risked that failure with UX research, and why they should do it.”

And he said that was the best thing because then they could correlate it directly to a business impact.


Mike: We recently published an interview on the Center for Human Insight with a very prominent product management consultant. His whole pitch is that product managers, in order to be successful, have to translate their findings and their work into business terms rather than explain things in engineering terms.

He said none of the usual engineering arguments like “it’s more elegant” or “this will make us more effective” matter to the most senior management in the company. It has to all be translated into dollars. “This will cost you $50 million in revenue and it will cost you half a million dollars to implement. Do you want to do it or not?”

The one thing I disagreed with him on was that he said the product manager has to be the person to make that translation into business terms. And I thought, no, everyone has to do it. If we talk about the most effective designers and the most effective UX researchers, they are capable of translating their work into business terms. Because otherwise the company just won’t engage; it just views them as advisors.

2 and 1: Ask the audience

Mike: For enemies two and one, you turned it over to the audience.

Lee: I did, let’s see what they said:

Enemy: “Dark patterns and an organization’s attempt to make more money at the expense of improving user experience.”

I don’t hear that a lot these days. There used to be lots of dark patterns. Forgive me for saying this to anyone who’s ever worked in search engine optimization (SEO), but there were a bunch of dark patterns in SEO, do you know what I mean? But it’s died right down now.

Enemy: “Too many cooks.”

I think that was more like: who owns UX and UX research?

We still face that. Is there UX representation at board level? And if there is, does that board member even know what the thing is? I think that remains a challenge for many companies.

Enemy: “Phase two.”

This is an interesting one, and this actually ties back a little bit to AB testing as well. People say, “Let’s just get it launched despite the usability problems, and then we’ll come back and fix it.” And actually people don’t come back to it. Maybe they do it a bit more now, as product management has moved on and we’re a bit more sophisticated and we manage backlogs in a more sophisticated way. But parking everything to phase two or to post-launch still happens and is pretty dangerous, because people are often tempted to move on to the next big shiny thing rather than what’s already out there.

Enemy: “Agile.”

When people talk about Agile and UX research, they talk about the speed of research within Agile and whether Agile is set up, or can be adapted, to support more discovery. And those are fair things. But actually the speed of research within Agile is pretty much a solved problem.

The bigger thing for me is that back then and even until now, I think Agile has kind of sucked the air out of the room when it comes to the senior team. It has almost taken the stage and kicked both design thinking and user research off the stage because the senior team could only deal with so much and Agile seemed like the nirvana that everyone should head to in order to overcome all of those issues that people have with technology and talent development and digital.

And that became such a focus for so many organizations. I think it was happening at that time and it’s continuing to happen, but certainly back then it was happening at a time when design thinking and more professional design and user research and user centeredness was also trying to get some kind of space with senior people.

But Agile just took all of that space.

There are some weird things in Agile, some of the words as well. Like they talk about customers in there, but they don’t mean customers <laugh>, they mean people who own the product.


Mike: When I started at UserTesting, in about this same time period, I would go to UX-related conferences and the design people would all get together and complain about how they were forced to do their work in one-week to two-week sprints in synchronization with the engineers, which meant that there was no chance to get ahead of issues around user experience. They were all pushing for design sprints at the time. Or to run design in parallel to or ahead of the engineering.

I still see that, but I think some of that has been worked out. Where I see the limitation now with Agile is two things. First, I think there’s still a lack of awareness that it’s possible to get user feedback on things really rapidly. I still get a lot of product people who say, “I don’t do that sort of stuff. It takes two weeks to get feedback, therefore I’m not going to do it because I can’t sacrifice that amount of time on a regular basis.”

And to me, they’re doing the research wrong if it takes two to three weeks to get it. But they don’t understand that it’s possible to change that.

The other thing I see is a lot of religion and assumptions around the concept of failing fast. It’s almost like it’s evil to try to think ahead of time about what customers might want or do. You’re better off simply throwing stuff out there, seeing which things stick to the wall, and doing more of that. As if it’s ethically wrong to try to second-guess what’s gonna happen, and instead you should just be throwing things randomly into the market.

And the flip side of that is a lot of people who believe that their gut for understanding customers is really, really good. So it’s, “I can just figure out what we should try. That’s good enough.”

I think both of those things are barriers. I’m curious, are you seeing those, or do you not see it that way?

Lee: There is an absolute drive to release code, yeah. And to just update and improve and change as quickly as possible.

It’s almost, to go back to the AB testing thing, where it used to be the number of experiments that mattered. Now it’s, “how much code have we released this week?”

So that still exists and many product teams are obsessed with how much they do, how much they create, you know, the kind of feature factory type thing.

The second point was the belief, especially on the part of product managers, that they have such a good intuitive understanding of customers that no research is necessary.

I don’t know that they think no research is necessary. I think they feel it adds time and complexity, and they’ll work it out once it’s live. If it sticks, great. If it doesn’t, we’ll come back and fix it. Which then, as we know, never happens. You’re driving future rework that you realistically aren’t going to have time to do. And that can be expensive!


Mike: Is there anything else that comes to mind that was not on the list seven or eight years ago that you would put on the list today? Emerging enemies to UX?

Lee: Anything that’s related to artificial intelligence and how to intelligently deploy AI within user research. Where can we leverage AI? There are some obvious areas around analysis. We’re looking at it, and that seems cool. Other areas are around how to improve study design and all that kind of stuff. And how to identify the right people to recruit.

All those things are great. My nervousness is if AI itself takes the human out of the equation. That’s what I’m starting to see some discussion of: “can we just push everything we know, from having done some previous research, into AI and somehow magically it can predict human behavior?”

Now it might be able to predict, but it’s still a prediction. I think it’s still super important to keep the human in all of the research that we do. 

AI might be able to make that process simpler and easier and guide us better. And AI might incidentally really help with some of the stuff we’ve been doing manually when we try to democratize research.

So AI can be a lovely guide for people who are new to user research just to help them out, to be that automated review of their study design. I think it’s got a tremendous opportunity, but it’s also threatening if we think AI can just take the human out.

Design polls on LinkedIn. Another thing I’m seeing emerge: I’m all for a quick poll or a quick survey, but I don’t like to see them on LinkedIn, where people ask for an opinion not from their target market but from other professionals. That’s slightly better than asking the person sat next to you in your office, because at least they’re from outside your organization, but they’re still just reviewing and giving an opinion rather than actually using it.

I’m seeing this pop up on LinkedIn quite a bit at the moment. You know, “Should we go for design A or design B?” I don’t mind things being cheap, but it’s a weird form of AB testing, run only with professionals who are nowhere near your end users.

Authenticity. I’m also going to give you a positive trend. It’s a geopolitical one as well.

These are the years of authenticity. That’s why people like Trump and Boris Johnson got elected: because they’re not seen as typical politicians, who are thought to be inauthentic. People perceive them to be more like themselves and to be authentic in some way because they don’t talk like politicians.

I think everywhere, certainly in the western world and maybe beyond, we’re seeing a desire for authenticity. Authentic people, authentic brands that don’t damage the environment, that are real, that are transparent.

Everywhere in the western world and maybe beyond, we’re seeing a desire for authenticity: authentic people, authentic brands that don’t damage the environment, that are real, that are transparent.

And that is a fantastic trend for anybody working in user research because we can get behind that and work out whether a brand, a person, or some communication is authentic.

Personalization. One thing I remember thinking about at the time I made the original presentation was personalization. It wasn’t necessarily personalization itself, but it was more that there was an absolute belief that everything should be personalized at the time.

It was like, “Consumers want personalization. When we surveyed consumers, 86% said they’d like stuff that was relevant to them.” But that doesn’t give you carte blanche to pile in with personalization technology and just rely on the technology to personalize without doing any research around it. That’s a good example of a tech-driven, top-down thing.

I see a substitution effect where companies assume that because they have adopted some personalization technology that it’s not necessary to do user research anymore. We don’t need to understand customers in aggregate because it’s going to be customized for each individual anyway. There’s always an excuse not to do it.

Belief in the accuracy of guesses. Let’s come back to one that you spoke about before, which was product owners believing that they innately know what end users want. This might be a good one to conclude on. One challenge we have is that user research, this kind of testing, is in itself challenging. It actually challenges what they believe they know about their customers and their end users, and the things they might have been saying forever: “Oh, our customers want this, they don’t want that,” and so on. It challenges whether they are actually the customer whisperer that they think they are.

It challenges whether they are actually the customer whisperer that they think they are.

Mike: I can’t even count the number of times I have spoken with people who use our sort of technology, and they’ll say to me something like, “I thought I really understood my customers, and then I did some user tests and I found out” — it’s always phrased like this — “I was sooo wrong.” It’s like they’ve had a religious awakening as to what their limitations are.

Lee: I think maybe not everybody realizes it can be that dramatic, but it will be for some people and it will really challenge them.

[Image: ominous image of UX enemies, generated by Dall-E 2]

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.