Summary

In the last several years, UserTesting has rebuilt its product design process to incorporate human insight at every step, and has scaled research to empower designers and product managers to gather their own insights. The transformation was led by Jason Giles, the company’s VP of product design, and Duncan Shingleton, director in charge of product research and design strategy. As part of the transformation they created a new product research process and merged two product teams that had different cultures and practices.

In this discussion they describe their journey: changes they made, problems along the way, and lessons for companies making their own insight transformations. Topics include:

  • Creating a research culture
  • The role of research ops
  • How to organize tactical and strategic research
  • How scaled research can go wrong, and ways to fix it
  • How to ensure quality in scaled research
  • How UT manages the research process
  • Tips for companies that have limited research resources

Getting started: How to introduce a new way of working

MM: When you first arrived at UT, was there a design research process? And how many people were working on it?

Jason Giles (left) and Duncan Shingleton

JG: When I came into the team we had eight designers, four researchers, and we had just hired a technical writer with the intent to establish UX writing. They had just jumped into scaling with both feet: “Hey, everybody should be doing research,” and so the designers were doing research. The researchers were trying to support them as well as do some of the more complex research. It was a good, exciting, but partially naive start. The intent was there. The spirit was willing. I would say the flesh was a little weak. The quality of the research was not always what we wanted it to be.

MM: How do you fix that situation? And any lessons on how to avoid it in the first place? 

JG: The great opportunity with joining a new company is, you can reset the lay of the land and reset expectations between roles. That’s always nice because I can come in and say, “hey, I’ve noticed this behavior. This is incorrect. This is how you should be thinking about it. And this is the behavior I want to see, primarily the relationship between design and research.”

What quickly became apparent in the case of UserTesting is that we were fundamentally introducing a new way of working. So let’s consider what we did before as an “alpha”. We used that to assess what was working and what wasn’t, the quality of the research being performed chief among them, and then we said, just like any other problem, let’s design the right solution to address these needs. And we did that inclusively as a team: How might we get to a place where we have confidence in the decisions we make, and we can scale and empower other non-researchers? So we started sketching things out.

It was also at that time that our chief experience officer was talking to a lot of customers. They wanted us to give them tips on how to deploy experience research throughout their organizations. So we decided to use our own product transformation as a testbed to develop those tips and best practices.

Primarily, the people working on it were my researchers plus my heads of design, because I wanted shared skin in the game. And we did a beta test of a new process. The process included setting up initial “rules of the road”. For example, you couldn’t launch a test until a researcher had reviewed your test plan. Or, here’s a set of templates that we’ve created that you can run your own test with.

We also set up office hours, twice a week, to facilitate those kinds of reviews.

This wasn’t just within my team. It was also expanded out to the product management team, which was great. We didn’t just target the designers, we also engaged enthusiastic PMs. The initial goal was to get some champions internally, because I’ve found that if you can find a few folks that are really showing success, it goes viral: “Hey, this saved me so much time. I have so much more confidence in what we’re building!” The next PM is like, “Hey, what is this thing that Joey was doing? Can you tell me more about it?”

And finally, we had very clear executive support. I mean, all the way up to Andy, our CEO. Our CTO was also a huge champion: not only was this a great way for us to “drink our own champagne,” but he’s also motivated to create great products and not waste engineering time on rework.

So that was phase two, which we rolled out quite successfully. I would say it probably took us about a year and a half to get all of the designers up to a point where they were familiar with the core types of validation research, or even doing some discovery work.

I would say it probably took us about a year and a half to get all of the designers up to a point where they were familiar with the core types of validation research

Of course, there are some designers that have more aptitude for it. And so they would ask the researchers, “when we do this other methodology, can I shadow you on it?” So some folks actually went much beyond what I would expect.

Research also got added to our career development ladders. So for designers, it was expected that they be proficient to a certain level in core research methodologies. Then, as we hired people in, that changed what we were looking for. We could set that expectation from the outset: Have you ever done research before? If you haven’t, are you open to it? And some folks were like, “Oh, my gosh! I haven’t had the opportunity, but I’ve always wanted to try that.” Over time we built this nice level of proficiency. Thanks to the types of research that we did, and with the gates that we put in place for oversight by the research team, we were getting quality results that I felt comfortable with.

How scaling can go wrong

MM: Have you seen scaling efforts that failed? If so, what went wrong with them?

JG: In one role, at a company I’d rather not specify, immediately when I came in I saw that there were some big design decisions that to me just did not look intuitive or proper. And so I asked the team about it. They said, “No, no, no, no. We researched the hell out of this, and it’s absolutely the right way to go.” So at first I took it at face value, but it really gnawed at me. Fortunately we still had all the raw data from tests. So I went back and looked at the tests and realized that the tests were written to reinforce the decision that the design director had wanted to make. And so they were simply invalid tests. I mean, they were completely contrived, giving the folks the sense that they had confidence in this design decision! Unfortunately we subsequently lived with that decision for years as it was so baked into the product.

And so I guess the naivete was, “hey, if we just give designers these tools, they will be objective and use good judgment.” They will know how to use these in ways that are appropriate and that we would still get high quality outcomes. And when I talked to my research manager at the time, he explained, it’s just really kind of tough because now we’ve handed over the keys of sanity to the designers and they feel like they’re empowered now with customer evidence to make design decisions that we are not comfortable with. And so there was almost a sense of disempowerment by my researchers when I first joined that team.

How does a design exec know when research is good quality?

MM: Say someone is a design executive who doesn’t have as heavy a background in research as you have. How do you know when the research is good quality? How do you know what’s right versus what’s self-serving?

JG: One, I always hire a trained expert as one of my first moves, if one is not already there. While I’ve managed research teams for many years, I’m a designer by trade. There is a school of thought that having research report into design is problematic because it can’t stay pure, it’s always going to be biased. I’m aware of that risk, so I’m very mindful of it. I need an outspoken and courageous voice at my side who’s willing to keep me honest.

And then ultimately, I’m responsible for the end user experience. So I had better feel confident about the decisions that we’re making and be able to prove them out. That leaves little room for self-serving, contrived research. Plus, I want to continue to invest in our team and grow and show the value. And with the research results that we’re getting, it gives me more ability to do that. 

Typically when I’m coming into a company, my job isn’t just to design a great product and build a stellar team, it’s to influence the way that decision makers and our executive staff think about their customers. Research is so critical to that. The customer stories, elevating a video of somebody crying when they’re trying to do something like buy tickets to a concert, it’s such an effective tool, and that’s why it’s become such a critical part of what I do.

My job isn’t just to design a great product and build a stellar team, it’s to influence the way that decision makers and our executive staff think about their customers

To answer your question, you just have to be aware that you might be biased. Don’t get me wrong. I know I don’t have all the answers, but I’ve got some really strong points of view about customer experiences. But it’s nice for me, as a leader, to express really strong opinions knowing that I’ve got the safety net of prototyping it to see what our customers think. It’s actually very liberating to me and unlocks more risk-taking and innovation. Sometimes I get a home run and sometimes I’m proved wrong, but that’s okay.

DS: You’re asking the question, how do I know if good research has been done? I would say, if you’re not seeing the expected results of that research in the product that’s live with your customers, and you’re seeing journeys fail, and you’re seeing drop off, and you’re seeing negative NPS even though you’re doing research, that might be an indicator that the research you’re doing is in some way biasing you to the wrong outcome. Because if you’re doing quality research in an unbiased way that’s actively understanding risk and working with the customers to understand need, then you should see the measures that matter to you changing. If they’re not changing, it might be an indication that the research is not being run quite right.

How to organize for successful scaling

MM: Let’s assume the head of design wants to make sure these things are working right, and that they’re going to hire a head of research who’s good. What else do they need to do organizationally, in terms of the way the researchers report, or the way their goals are structured? Do you have to explicitly say to everybody in your team, “hey, the researcher is independent. They’re empowered to tell you that your work sucks”? I’m trying to picture myself as a head of design who’s trying to make this happen, but maybe doesn’t have your level of experience. What do they need to set up in order to give the research a good chance of working? I’m interested in both of your perspectives on that.

JG: I had to do this at Ticketmaster, where they didn’t have a formal research function in the beginning. It’s kind of both the carrot and the stick: “We’re introducing this new role. They are accountable for informing these kinds of decisions, providing these kinds of quality controls, and giving feedback to the company.” So there’s clear accountability. And you also list the benefits that you’re gonna get. And here’s the kind of expectation that I have for the way that we work together. This isn’t just a service request. I expect that for design exploration kickoffs, you’ve got a researcher in the room. I get specific about certain behaviors that I’ve known from my experience won’t happen unless I say them explicitly.

It’s like any other role, right? If you’re introducing a new role, you need to set expectations. But then, also, here’s why you should be excited about it.

DS: I agree with all that. I think there’s also something around creating psychological safety. The introduction of a researcher role will inherently tell us where we’re failing more than where we’re succeeding. And sometimes it’s hard to have the critique brought to your work when maybe you thought you were doing A-grade work all the time, and then someone starts marking it like, that’s C grade. It’s D grade. But that’s okay, because we all understand how we get it to an A grade.

It’s the culture of failure, right? Research will indicate where we are failing. Maybe there might have not been that formal lens put on work in the past. And designers need to know, it’s okay that a researcher is going to come and tell you where your work is suboptimal. But you’re not gonna get penalized for that. That’s not gonna impact your career. We’re not gonna get pulled into a meeting room and start talking about your performance. We’re gonna use the research not only to improve the product, but improve you as a designer, improve how we think. 

Designers need to know it’s okay that a researcher is going to tell you where your work is suboptimal. You’re not gonna get penalized for that.

So it’s about safety: understanding that as soon as you bring real rigorous critique into the room, there’s a risk that we expose our flaws. And it’s okay to do that.

MM: This reminds me a little bit of some descriptions of agile, where you say it’s okay to fail as long as you learn from it. Is that fair? 

JG: Absolutely. 

DS: We want to be doing that quickly, so we know where our designs are not working. It’s a great, great analogy.

Merging different research cultures: The transition to scaled research

MM: So that’s the history. Talk to me about the organization today. How’s it structured? How’s it working? Are there new frontiers you’re working on?

JG: Always. Most recently, we merged the two companies together, UserTesting and UserZoom. We realized that there were two different organizational models that the two companies had. In UserTesting, we had our scaled model. That was not the way that research was aligned in UserZoom. Research reported into the product organization, not under design, and it was much more of a service bureau where specific research was requested. Only researchers did that research. And it was amazingly high quality, of course.

We had to decide what we were going to do moving forward. Because we had seen the effectiveness and all the goodness of the scaled program, I quickly decided that we were gonna continue down that path.

Half the team of designers and writers were very comfortable with this approach as this was how we were set up before, but others had all sorts of concerns and questions. “When am I going to have time for this?” “I don’t know how to do it.” “This makes me really nervous.” But also excitement: “Oh, my gosh, this seems really cool.”

So we kind of had to start back at the beginning with those folks. But unlike before, we had peer designers who were very comfortable and had worked with their researcher collaboratively for a long time. They were able to both model the behavior, but then also help out: “Hey, you’ve never done this type of a card sort before, let me help you get it set up, and then we’ll review it with the researcher.” The feedback was almost unanimous … folks were very excited. One of our more junior designers talked to me after he had run his first test. He just was almost jumping out of his skin because he was just so proud and excited to have performed his first test. 

DS: We had a nice baseline, but we had to do that transition.

During the migration discussions, I had conversations with all the researchers from both sides. As you can imagine, folks who were from the centralized, product-managed model were used to working more around roadmap validation, and informing what was coming down the pipe. They had concerns around, “wait a minute, we just don’t wanna be validation monkeys.” But there was also openness to, “we’ll give it a try, and we do see the opportunities there.” 

That was also the point that we decided to invest in a more formal research operations function. If we’re gonna do this at scale, there’s just a lot more to think about. For example, standing up a panel specifically with our customers that use a breadth of our product offerings, that takes a lot of work. So that was something that we invested in due to the merger.

Plan for research ops from the start

MM: Do you have any rule of thumb, based on the size or complexity of a design team, for when you need that research ops function to make everything work?

DS: From the start. I think it’s like any other part of the business. If you scale without process, as soon as you reach a certain size you kind of fall down. It’s similar to standardizing your thinking about design systems and workflows; the sooner you start that the better. I think the earlier you bring in research ops, the smoother your rollout and growth of research in your organization will be. I mean, we don’t have a particularly large research team here. Becky, who is our research ops person, is in a full-time role. She spends all of her time helping with recruitment, improving processes, documenting the way we do things, looking at how we work with legal and finance to standardize incentives. This is a large-scale activity that can’t happen off the side of someone’s desk.

So actually having someone thinking about it, whether it’s a full-time role or a carved-out percentage of someone’s week, is the way to go.

JG: If I’m starting a team and I’ve got five heads, I’m not going to hire a full-time research ops person. I’m gonna hire a researcher that has explicit accountability, with a percentage of time dedicated to research ops activities. This also happens to be true with design ops.

MM: I’m finding that the research ops role varies tremendously from company to company. Duncan, what are the key things that you want research ops to do for you? What do they need to drive?

DS: I’ll give you the highlights of the job description. Defining our process around how we do research here at UserTesting, both within the research team and how we democratize it: the methods we use, the documentation, and the educational rollout program about how we use those methods. They’re there to standardize how we disseminate insights (we use EnjoyHQ for that), reminding our researchers, myself included, that a project isn’t done until it’s in the repository.

They also plan our panel, how we build that, how we think about incentives. And also how we collect and articulate the return on investment as well. They help me write business cases that I can take to my management and say why we need more headcount. That is always quite hard in an internal research team versus a professional services team. While they have a pipeline with dollars attached, I’ve got to think of other ways in which I can help the organization understand the value of having more research heads.

JG: There’s probably things that Duncan does that could fall under research ops as well. Some of the administrative stuff, setting up the regular team meetings and sharing the notes from them. The role is also typically responsible for the career ladders of the research discipline. 

DS: The key for me is we get to a place where the way in which we do research is not tied to the researcher. It’s independent of that. So if someone’s interviewing to come here, this is how we do research. You walk in the door and you get the playbook, so if one researcher is out and another researcher needs to go and support a team, the impact is negligible. 

Does the research ops person need to be a trained researcher?

MM: A discussion that I have sometimes with research ops people is, does the research ops person need to be a trained researcher? What’s your take?

DS: Yes, they do. 

Could I write the schematics of how to repair a car without having been trained as an automotive engineer? I need to have practiced that craft in order to be able to understand the playbook I need to write. How can you teach someone else if you have never done it? 

JG: The alternative is that you are going to have to be a facilitator of the conversations the team needs in order to make decisions, for example, “should we use super Q? Or should we use this?”

DS: It’s also about trust and confidence of the other researchers in the processes that are being established by the ops person. Our ops person has been doing testing for the company for seven years. She holds a Ph.D. She has credibility and gravitas when she makes a recommendation. Everyone in the team has been a stakeholder in creating our processes, but ultimately she has the final accountability for defining what our process is. Someone that’s never done a piece of research in their life would find incredible friction in convincing other researchers that this is the right course of action. Convincing another researcher how to use a method they have never used? That would be very, very difficult.

MM: FYI, that is not the standard that I’m hearing from a lot of the companies that I talk to. I’d say the majority of the time people tell me that the ops person does not need to be a skilled practitioner because they’re focused so much on just the mechanics of the process. I think maybe they’re treating the role as a little bit more junior and a little bit more mechanical than you are.

JG: Interesting. I’m not sure I agree with that approach.

DS: Becky is our most senior researcher. That enables me, as the person responsible for trying to activate this in the organization, to have confidence that she is shepherding the other researchers in the right way. 

JG: The operational roles often don’t get the credit they deserve. When I was at another company, there was a designer on another team who was really struggling with doing good work. And so they built a design ops role for them, hoping that might be a better fit. But because this person wasn’t a credible designer, it was really tough for them to drive the needed change. I would imagine the same problem with research ops.

DS: How am I empowering my ops person to be successful in that role if I’m always the gatekeeper between them and the rest of business? The business needs to see credibility in them.

What if there’s no time for research?

MM: Jason, you talked about forcing the designers to think about what research they will need for a project, and allocating time for it. That presupposes a situation in which the design team is able to set its own schedules. Often when talking to designers at other companies, I get the impression that they don’t feel they are in control of their own time. I’ll hear things like, “they just want me to draw them a design in a single sprint” or things like that. So the design element is just a step in the product dev process, and the schedule for the dev process is set external to the designers. Does that situation sound familiar to you? If so, how common do you think it is? I’m wondering if there isn’t a deeper issue about design maturity at many companies that makes it hard for them to deal with the issues we’re raising…

JG: Oh yeah, very common. “Agile” has introduced a nightmare for design teams. That’s why I’m a huge fan of dual-track agile, as it’s really the only way I’ve found to get teams out ahead of the voracious engineering machine. And to do that, you need to be able to estimate how much design/research time is needed. That’s what the design plan is for. Think through what is needed, plan the activities, then estimate the timeline. Often designers complain, “I don’t have any time to do proper design!” Then they’re asked, “well, what do you need?” “I don’t know!”

The design plan is just that, a plan of attack, open for discussion. It gives design leadership the confidence that key mandated activities are happening (e.g., critique, user feedback, accessibility reviews), gives design managers a tool to mentor their designers (e.g., use this activity instead, you should time-box this activity), gives PMs the opportunity to build the activities into the product plan, sets expectations with engineering teams … the list goes on.

And yes, having design activities reflected in the product plan is a higher level of maturity. Even for us, we are currently tracking these externally. The next step is to bring those into the tools product/engineering use for more visibility, accountability, etc. (e.g., Jira, ProductBoard). But those tools aren’t designed to be used that way, nor by the design persona, so we have to hack them (which always makes it a fight every time I make a team do it).

How experience research can drive the success of the company (and how it can fail)

MM: Do you have a favorite project in which research made a big difference? 

JG: I have two. The first one was when the two companies came together, we had to make a lot of assumptions about the technologies we would use, as speed to market was a high priority. Specifically, we chose to go down the path of using technology Option A for our future moderated product. Three months went by and then we hit some challenges on the technical side. And so the team started thinking, “wait a minute, maybe we should consider Option B.”

We could have spent months and months changing course. But we had chosen that option due to the quality of the user experience. And so we went and asked our customers. Within two days, we had feedback saying, “No, please stay on the course that you’re on. The experience, the functionality, the capabilities … Option A is much preferred.” It basically shut down all the conversations immediately. So the team sucked up the technical challenges and moved forward. It just totally cut short a ton of churn, and possibly a wrong decision that we could have pursued. And literally all it took was a few days to make the right decision.

The second example was in concept testing. We had a bunch of ideas on capabilities to explore and performed a quick quant survey with prototypes of 20 different concepts that we could invest in. A week later we walked into a key planning meeting and said, “Look, here’s where we gotta focus. Here’s the stuff that we have to do first.” Now, the thing at the very bottom of the list was a capability that a team was really excited about pursuing. Based on the research, we were able to properly scope that particular work and prioritize other capabilities that were more valued by our customers.

How to avoid doing too much research

MM: We’re talking about how you’re scaling research to designers. Is the next step to also empower product managers to be able to do their own research?

JG: Opportunistically, when certain PMs show an aptitude or a desire, we roll them right in, the same as a designer.

DS: We track, as part of our process, what projects are under what we call full service (being led by a user researcher) versus what’s self service (led by another discipline, but supported by a user researcher). A lot of the requests in the self-service space are product managers conducting research.

We’re helping our PMs to understand where they need to use research to provide confidence in the planning of their activity: where they’re breaking down their initiatives into epics, ensuring the work is sufficiently scoped and prioritized, and the value to the customer clearly defined. This is especially important when the wider delivery team, including designers and engineers, is trying to make trade-off decisions about where they can potentially revisit scope and quality to accelerate delivery. We use Pendo’s prioritization matrix to help our PMs think about where research most needs to happen, and then we have a tiering model mapped to the four stages of research to help them understand the timelines required to execute on the research.

We also help our product directors to understand value across all initiatives their PMs are tackling. Whenever a piece of work is assigned, it’s understandably 100% important to the person who’s working on it. But how important is that initiative in relation to all other exercises that are happening within the organization? It might be a hundred percent important to me, but only 1% important to the business.

Helping our entire product team to regularly understand the value of their work, in relation to all other work, helps us move from hearing positive feedback in isolation at an initiative level to thinking more broadly at a roadmap level. Just because something in isolation might be the right thing to do (our new thinking on an existing feature is much “better”), does that also mean it’s the right thing to be committing resources to now, in terms of everything we could possibly deliver? Having a high-level view of research activities across the board, and regularly comparing value, is critical to mitigating that.

How to strike a balance between tactical and strategic research

JG: This is a really important point: What does scaling allow our researchers to do? It’s more strategic research like comparing features across different investments. If all of the research resources were down within the squad, they would just be creating great, beautiful little features that might not create the value that we’re hoping to get versus looking strategically across the business.

DS: That’s one of the changes we’ve made recently in the structure of our team. We had our four product areas, and we had a senior researcher sitting in each product area. And that senior researcher naturally got drawn down into more of the evaluative work of designs, because that’s the closest to the coalface.

And so we’ve made a change in the structure of the research team to bring in a senior level of researcher who is working with their senior peers in design and product and engineering, to think broadly and strategically across the groups. And we brought in a junior strata of the team whose predominant role is just to execute against that stuff that’s closest to the ground. 

Sometimes there’s a temptation for organizations to bring in lots of senior researchers with loads of experience, thinking that will create velocity. But actually often there’s as much junior-esque research that needs to be done. If you don’t have junior researchers, the seniors are drawn down into that space. They’re often left supporting the loudest voice in the room, which is “get the design into shape so we can deliver.”

UserTesting’s process for managing tactical and strategic experience research

MM: So the goal is that the senior people are for prioritization and direction, and the junior people are for velocity. You need to balance both. Then the whole point of scaling or democratization would be that by scaling to non-researchers you get even more resources to drive velocity. You’re not gonna use the scaled research to make the big strategic decisions, you’re gonna use that to support the small tactical decisions. And that’s what frees the more skilled researchers, and presumably the more skilled strategists on the design side and on the product side, to collaborate about the long term.

So, Duncan, take me through the process you use to manage all of this.

DS: We have a playbook that we have been working on for the last six months that goes through all of the various aspects of the process, and the documents that are needed for our process.

Our model is to embed research in other domains. We don’t expect people to come to us, we go to them. And so researchers are embedded in each of our product domains. We have them there to listen and understand the unknowns that people have from the product space and the design space. 

Conceptual chart: How UT product development teams are organized

DS: We have an intake form that captures new requests for research: what it is, who it’s coming from, their role, the decision that they’re trying to make, and the consequence of not having that knowledge.

The research team meets every day to triage new requests. We triage them and assign them out to the research leads. It then falls to the leads to assign those out to the most pertinent researcher to work on them. Usually it’s the junior that’s assigned to their space. But sometimes it could be me or another researcher in the team depending upon the activity that comes in. 

UserTesting’s experience research planning and prioritization process

We capture a backlog. A research plan template is sent back to the individual who submitted the request. We have a structured form that asks a set of questions. In it we decide on methodologies and the questions we’re trying to answer. Who’s the type of participant that we need? Do we need to incentivize that research? So there’s a lot of documentation that sits behind the scenes.
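To make the shape of that pipeline concrete, here is a minimal sketch of the intake record and research plan as data structures, in Python. The field names are inferred from the conversation above, not taken from UserTesting’s actual forms; treat it as an illustration of what gets captured, not as their implementation.

```python
from dataclasses import dataclass, field


@dataclass
class IntakeRequest:
    """A new research request, as captured by the intake form."""
    summary: str              # what it is
    requester: str            # who it's coming from
    requester_role: str       # e.g., designer or PM
    decision_to_make: str     # the decision the research should inform
    cost_of_not_knowing: str  # the consequence of not having that knowledge


@dataclass
class ResearchPlan:
    """The plan template sent back to whoever submitted the request."""
    request: IntakeRequest
    methodology: str = ""          # decided together with a researcher
    questions: list = field(default_factory=list)  # what we try to answer
    participant_profile: str = ""  # the type of participant needed
    incentivized: bool = False     # do we need to incentivize the research?
    assigned_researcher: str = ""  # set by a research lead at daily triage
    status: str = "backlog"        # "backlog" -> "in progress" -> "done"
```

In these terms, the daily triage meeting amounts to walking the list of open IntakeRequests, deciding which warrant a plan, and setting assigned_researcher.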

A video walkthrough of the UserTesting research template (no sound)

Ultimately it all ends up in Monday.com, which we use to manage our backlog.

UserTesting’s experience research backlog

We prioritize constantly with the leads to make sure the backlog is in the right order. And then once a project starts it just walks through a normal research process: write a plan, conduct the research, perform the analysis, disseminate it, throw it into EnjoyHQ, and move it to “Done”. 

Embed research into the product process from the start

The real difference that we’ve made, compared to standard research processes, is embedding the researcher into all the product conversations from the start. Sometimes research is the last discipline that comes onto the field. Historically, maybe the designer or PM has never had a dedicated researcher to work alongside them. They forget to add the researcher into key meetings, and we have to go and remind them. That’s more effective than expecting people to come to our drop-in sessions. Research goes and engages in design and product conversations, rather than asking those disciplines to come and engage in research conversations.

Research goes and engages in design and product conversations, rather than asking those disciplines to come and engage in research conversations.

JG: You’ll note that this is an important evolution from where we started with “research office hours,” where we expected people to come to the research team. 

Two key elements: Triage, and using the design process to force research discussions

JG: There’s two things that I wanted to layer on. 

That triage process is super important for me as a leader, because this is an opportunity for Duncan and the team to say, “If there isn’t going to be an action taken on this research, we’re not going to do it.” So there’s a gate there where we say we already know this information, or when we consider the capacity that we currently have, we’re going to deprioritize it. And then we set expectations with those teams. That triage is actually really important because sometimes we already know the question being asked. They’ll be like, “Hey, I want to do this research.” But we’ve studied this a hundred times, or it’s already a known industry pattern. So you don’t need to do this research. One of the unique problems we face is that sometimes we’re actually researching too much. That’s a problem that a lot of teams probably don’t have. 

The other thing is that from the design side, we’ve also standardized our processes. For every project that we’re gonna start kicking off, we have a design plan. This is a document primarily for the designer’s purposes, but it has sections that identify the persona, outline the design activities to perform, establish the accessibility plan, etc. And there’s also a section for what research activities are going to be done. In order to fill that in, the designer has to reach out to the researcher and say, “Hey, here’s this new project. What do you suggest as research activities?”

So there’s this required step in the design process itself. While I love that the researchers are putting themselves in environments where they can proactively get involved, from the design side we’re also making it a required step to engage with the research team.
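As an illustration of how that required step can act as a gate, here is a sketch of a design plan whose kickoff check fails until the research section has been filled in with a researcher. The structure and names are assumptions for the sake of the example, not UserTesting’s actual document.

```python
from dataclasses import dataclass, field


@dataclass
class DesignPlan:
    """Per-project design plan (sections as described above; names illustrative)."""
    project: str
    persona: str = ""               # who the design is for
    design_activities: list = field(default_factory=list)
    accessibility_plan: str = ""
    # Research section: filled in only after talking to a researcher.
    research_activities: list = field(default_factory=list)
    research_reviewed_by: str = ""  # which researcher was consulted

    def ready_to_kick_off(self) -> bool:
        # The project can't kick off until research activities have been
        # planned and a researcher has reviewed the plan.
        return bool(self.research_activities) and bool(self.research_reviewed_by)
```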

MM: Let me be sure that I’m understanding that. In order for a design project to go forward, it has to call out what research you’re going to do. And the designer has to have a conversation with a researcher to determine what that’s gonna be. Even if we’ll let you run the research yourself, you still have to check in with a researcher to make sure they’re okay with that. Right?

JG: Exactly right. And if the designers are going to do the research themselves, that will affect their timeline, which they can plan for.

DS: So we’re identifying that research need, and that can come from a designer reaching out to us, or us reaching out to them. And then it just follows the process: Understand the complexity of it, what’s needed? Can you self-serve on it? Is it your first time doing this methodology, so we’re going to support you through it, or hey, you’ve done this three times before, we have confidence in you. Or you can self-serve on the test plan building, but you can’t self-serve on the analysis.

So we’re having that conversation to understand the need, the competency, and whether it’s a known known. And then it’s just following plans and executing, but with very clear expectations about what we need and expect from our stakeholder and what our stakeholder needs and expects from us. 
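One way to picture that conversation is as a routing rule applied per project phase. The thresholds below (first time with a method, versus having run it three times before) are taken literally from the description above; in practice this is a judgment call made per person, not a hard-coded rule.

```python
def support_level(times_run_method: int, phase: str) -> str:
    """Illustrative routing of a self-service request.

    phase is either "test_plan" or "analysis". Thresholds are examples.
    """
    if times_run_method == 0:
        # First time with this methodology: a researcher supports throughout.
        return "researcher-supported"
    if phase == "analysis" and times_run_method < 3:
        # You can self-serve on building the test plan,
        # but not yet on the analysis.
        return "researcher-supported"
    # Done it several times before: we have confidence in you.
    return "self-serve"
```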

Tailor the degree of scaling to the skills of the non-researcher

We create that shared understanding about what we are both bringing to this party. It’s quite consistent from our end, but will vary depending upon the person who’s engaging us and what discipline they’re engaging us from. We don’t engage with designers in one way, blanket. We engage with Josie like this, because she’s got great experience running research, and we have high confidence that she’s good in these methods. We engage with Simon differently. We engage with Amelia differently. Each designer is almost on their own research journey, and our researchers are very good at knowing who that individual is and their capabilities and what support that they need. It might be that we never get them to the point of self-serving. That’s not a failure. That’s just a person who needs support more than other people. They’re just all different.

How to scale when you have few researchers to supervise the process

MM: Companies tend to get stuck in a binary decision of either I’m not going to allow any non-researchers to do tests, or if I do it, it’s gonna be the Wild West, and they’re gonna be free to do whatever they want. What you’re describing is a very controlled process in which decision-making about research still touches the researchers, but the execution can be delegated to people appropriately on a case by case basis. And that’s a different mental model than I think a lot of our customers have.

I do want to challenge you guys a little bit about resource levels. Jason, you took me through the ratio of researchers to designers that we have. It’s still majority designers, but UT’s ratio of researchers to designers is stronger on the research side than the typical ratio that we see at other companies.

If you’re operating within a much more restricted resource base for research, are there any best practices on how to make that work?

JG: Getting to know an individual designer and their comfort level requires time. If you don’t have that time, or if you’ve got one researcher that’s supporting 12-15 designers, you might have to get a little bit more rigid about the type of research that can be done self-serve. And you templatize it.

MM: So because there is less time for conversation and nuance, you substitute some inflexibility. That keeps the process on the rails. You make it more black and white: “You’ve got to do a validation test on a sketch, use this template, go make it happen.” It’s not going to be as perfect as it would be if a researcher could be in the loop, but on average it’s better than no research at all.

JG: Exactly.

If you can’t mandate scaling, you have to persuade

MM: Is there anything else you wanted to bring up?

JG: One of the benefits of research reporting into design is that I can mandate stuff. When you asked about empowering product managers to run tests, I have to do that via influence. I need to convince their management team that it’s a high enough priority that it should take precedence over something else. So that would be an additional step that I would need to do, because I don’t own that discipline.

MM: So if you had a particular PM who wanted to do research, it sounds like you would accommodate them. 

JG: Absolutely! I love those gems!

MM: But if it was going to be a mandate — like all PMs are going to run discovery research for their projects — that’s a different thing. Because then you’re looking for a rule from the head of product, and you can’t do that yourself.

JG: That’s right. A simple mandate might be “every feature kickoff has a discovery call with a customer.” Simple, we just start small. You put in the small mandate, just to get the thing in there, and it becomes a checklist item in their things to do. “Did you do a product discovery call with a prospect or customer? No? Okay, remember, that’s what we’re doing now.”

DS: The important thing is the practice of user research. To say that is the domain of a specific discipline, the user researcher, is only going to lead companies to fail in their efforts to understand their customers. Researchers are typically at the bottom of the ratio. If we look at the wider industry, there is usually one researcher to every five PMs and five designers in an organization. Saying that user research can only be done by user researchers will just create a blocker for any product team. So you’ve got to trust your other disciplines to execute, and you’ve got to build executive buy-in as to why those disciplines should be executing.

The important thing is the practice of user research. To say that is the domain of a specific discipline, the user researcher, is only going to lead companies to fail in their efforts to understand their customers.

In my opinion it all comes back to time. We all have, say, 40 hours a week, so we need to agree what ratio of time a PM or designer is going to spend doing user research. When is the practice of user research viable as a facet of their role, because they can dedicate the time required, and when is it feasible, because they have appropriate levels of proficiency? When it’s neither viable nor feasible, that’s when we need to have one of our user researchers step in to execute on the research. In that way we’re enabling our PMs or designers to spend their time on the other things that are core to their job, while our researchers gather on their behalf the missing knowledge that’s preventing them from making a decision.
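Reduced to a decision rule, that viable/feasible framing looks something like the sketch below. The signature and inputs are invented for illustration: viability is about time, feasibility about proficiency.

```python
def who_executes(hours_available: float, hours_required: float,
                 proficient_in_method: bool) -> str:
    """Sketch of the viable/feasible test described above (illustrative)."""
    viable = hours_available >= hours_required  # can they dedicate the time?
    feasible = proficient_in_method             # do they have the proficiency?
    if viable and feasible:
        return "PM or designer self-serves, supported by a researcher"
    # Otherwise a user researcher steps in to execute the research,
    # freeing the PM or designer for the core of their job.
    return "user researcher executes"
```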

We’re trying not to create an “ivory tower of research.” Collectively, as a product team, we’re all trying to answer the questions, “How do you identify the important steps where the practice of user research needs to happen? Who’s best placed to be doing it?” We’re trying to be pragmatic and embrace that the importance of the practice of user research is not that it needs to be done by this particular discipline, but simply that it needs to be done.

JG: When I think of my mission, it’s not just about building a great product and a great team culture. It is about influencing a customer-focused company, building a business. And so anybody who is willing to go and talk to a customer face to face — whether it’s an engineer who’s like, “Hey, I’d love to like sit in on this moderated session,” or it’s an executive that’s like, “Hey, I realize I haven’t talked to a customer in a couple of months like, let’s set up a Customer Advisory Board” — it’s about culture change and a mentality of a company. And so you use every means possible to change that culture.

Find friendlies: How to start building customer insight into the product culture

MM: If somebody wants to start on this path — say they don’t have the right culture at this point, but they’re bought in on this idea, and they want to start driving this transformation — is there one place where they ought to try to apply research first? Is there a magic first step on this path?

JG: I’ve done this a lot. I start by canvassing for likely candidates who are “friendlies”. I talk to the head of finance about what problems they’re trying to solve. I talk to the CEO and CTO and ask myself, are there opportunities with influencers where I can go help them solve a problem or have more confidence in a decision?

So it’s kind of canvassing to see where’s the opportunity to demonstrate impact. I actually don’t care if it’s with the product team or the HR team, because it grows on itself. Once you get a champion, they tell their peers, who ask “Why isn’t our design team doing that for us?” “Well, I don’t know!?” “Let’s go talk to the design team!”

It’s really finding who the champions are. And then let’s assume that there’s a bunch of champions. Then you’re focusing on what’s going to have the most visible impact, because you’re wanting to start a movement.

DS: What’s the purpose of research? The purpose of research is to gather knowledge to create shared understanding. The consequence of not having shared understanding is conflict. So I’d be looking for the greatest amount of conflict happening in my business. That’s the place where I need to start doing research to create shared understanding, and reduce conflict.

JG: Oh, that’s good. I am writing that down. I like his answer.

MM: I think there’s maybe a subtle difference between the two answers that you gave me. Duncan, I think that was a specific research-linked one, whereas Jason, I think I almost felt like you were talking about how do you start getting design thinking, for lack of a better phrase, into the organization. Or customer experience thinking.

So I think you’re both right.

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.