How should product research and product management work together? At many companies, it’s like mixing oil and water. The two functions work on different timelines, have different priorities, and even use different vocabularies. We often hear from product managers who are frustrated by their inability to get research when they need it, and from researchers who complain that the product team doesn’t listen to them. 

Over the last two years, UserTesting has restructured its research, design, and product management processes to bring them closer together. We talked with two people in the middle of the change: Kevin Legere, a senior director of product management, and Poppy Smithers, a manager of product research. They described what has changed and what they learned along the way, including:

  • Product management problems that can be solved through research
  • How to structure the relationship between PM and research
  • How to do research within the aggressive timelines product teams need
  • The benefits of democratization/scaling for both research and PM
  • Best practices for scaling

(Note: This conversation was edited for length and clarity.)

The relationship between research and PM is broken at many companies

KL: It was really eye-opening to me when I went to the Human Insight Summit, and I heard what our customers had to say. Every day I tried to sit with a different group of people. I just wanted to talk to customers. More often than not, I ended up at a table with UX researchers and product designers. Our customers would see my name tag and job title and say, “you’re a product manager. My product manager doesn’t listen to my recommendations.” Or, “we put in all this work and we do research and we think we have great evidence, and then product management decides to go in a different direction.”

It was really surprising for me to hear that. It started this little fire inside of me: I needed to help other product managers understand the value of the information that's being given to them. I understand there are more aspects than just the research and the evidence; there's the level of effort, and there are constraints you're under. But if you build a partnership with your researcher and they understand those same constraints, you can get aligned on the direction and make those decisions together.

I think that partnership is what Poppy and I have been building over time. Poppy is invited to the meetings where those decisions are being made and those constraints are being talked about. She’s involved in helping to make those decisions and that helps us stay aligned. 

I think now, with our journey together, I really depend on Poppy probably more than ever and couldn’t imagine doing my job without the insights that she brings to the team. I want other product managers to have that same luxury and realization. 

How to use research to align and defend decisions

MM: Why were you surprised when the researchers said that the product managers don’t listen to them? Was it just that you’d never had that conversation with researchers before?

KL: I guess I’m not surprised that it happens, knowing some of the product leaders I’ve worked for in the past who go more off gut than evidence. The thing that surprised me was that it was a really common theme in every group I talked to at the Human Insight Summit.

I have a degree in math and statistics, so my decision making is really rooted in logic. But as a product manager, you need to make a lot of decisions with imperfect information, and you’re trying to make the best decision you can, given what you know. In my experience over the last several months, our decision making is so much better with the information we’re able to gather from research, and with how fast we can get it.

Once you lean on that evidence, then every time someone’s coming back and questioning why you made a decision, you can point them to that evidence you have from the research and any other information or constraints you had to consider in making it. Bringing all those things together into a story and a decision makes it so much easier, especially at big companies, to help everybody understand why you decided to go a certain way.

Every time someone’s coming back and questioning why you made a decision, you can point them to that evidence you have from the research

MM: So is the benefit that you get the team aligned around the decision in the first place? Or is it that after you’ve made the decision, you can explain it to others so they don’t mess with you and they support where you’re trying to go? 

KL: I think it’s a little bit of both. You try to make the decision together as a team as much as you can. Obviously, the bigger that group of people is, sometimes the harder it is. But you need to identify who’s the core team of people to help make that decision. And then once that’s made, the bigger your company is, the harder it is to communicate the decision and the reasons behind it. If you have all of that information captured, with the evidence to support it and the constraints that were taken into consideration, then it’s really easy to give them a link to where all of that lives, and you don’t need to book another meeting to go through everything all over again.

Research doesn’t have to be slow

MM: Poppy, you’ve been with the company for about a year and a half, and you were doing research at other places before that. Based on your background and the folks you know in research, were you surprised by the story that Kevin heard at the conference?

PS: On the whole, I’m not surprised. Generally there is this misconception that research is a blocker and something that takes a long, long time. I think many product teams view research as a nice-to-have if you have the time, but often decisions need to be made quickly and people assume that they don’t have enough time for research.

Many product teams view research as a nice-to-have if you have the time, but often decisions need to be made quickly and people assume that they don’t have enough time for research.

However, I’m a firm believer that research can come in all shapes and sizes. Some pieces can take eight weeks if you’re doing a massive strategic discovery. But we’ve turned around research to unblock things within a week or two. I think that’s a struggle to explain to a PM who’s not as invested in research or who doesn’t actively partner with the research team. I’m very lucky with Kevin because he sees the benefits of research, trusts us to provide insights within an agreed timeline, and views it more as an enabler than a blocker.

If you need to make a decision quickly, set a deadline for the research

KL: One thing that has really helped is we’ve used time as a constraint for how we get the evidence we need. We’ve put timelines on things where we need to make a decision really quickly, like by next Friday we need to bring together all the research and any other information we have and make a decision. We’ve used time as a healthy constraint on how deep to go on some of that research. So instead of it being a blocker, we just say, when do we need to make this decision? And then Poppy works the magic to get as much as we can within that constraint. And I think that’s worked really well.

PS: There was one example where we had one week to make a strategic decision, and needed to understand what would be best for our customers. However, as it was a big strategic piece, there was no chance of getting all of the required research completed in this timeframe. As a result, Kevin and I agreed to break the research down, with the plan to provide some initial quantitative findings within a week, followed by the qualitative analysis three weeks later. Whilst analyzing the quant results, I was able to skim through the qualitative data to get a sense of the rationale behind the findings, but wasn’t under pressure to conduct the full analysis straightaway. This approach really worked for us in this scenario, as it enabled us to make a decision off the back of some high-level quant stats, with the added bonus of the qual data later. So, it’s about not thinking of every research project as this massive thing, because it can be chunked up. 

To manage the workload, involve PM in decisions about research priority

MM: Poppy, doing this sort of close engagement with the issues Kevin’s working on implies that you’ve got some time set aside for that. Sometimes when I talk to researchers, they have a huge backlog of requests, because there’s always more desire for research than there is time available to do it. And so being able to respond really quickly to a request from product becomes difficult because they’re so booked up. I’m curious about how you manage the workload. How do you free up the time to be responsive like this? 

PS: There are definitely exceptions where we can’t do things in the requested time frame. Underneath Kevin, there are four different PMs and for them, every research piece that they put forward is really high priority. So it’s quite hard to juggle that. But Kevin has a more strategic view. So I’ll go to him and say, “there are 20 things on my backlog, what’s the priority?” And he’ll help me reprioritize. And we have regular catch-ups to do that, which is really helpful. 

And then when I have to go back to a PM and say, I’m going to have to postpone that piece of research because this other priority’s come up, it’s backed up by Kevin who has a view of what the strategic priorities are for the company. 

MM: Are you full-time on Kevin’s part of product management, or are there other organizations you’re working with as well? 

PS: I’ve recently taken on another product management team as well, but that’s only been within the last couple of weeks. So it’s kind of TBD how I’m gonna cope with it when I’m juggling both. But I’ve spoken to Duncan [who manages the research function] about it, and I think if we get to that point where there are competing priorities, then we just get Kevin and Karan, who’s the other PM, to discuss that together and we’ll collectively figure out the best course of action. 

MM: Sometimes people ask me about the ratio of product managers to researchers. Previously it was you doing research with Kevin and four other product managers, so that was one to five. And then what will the ratio be now with Karan’s team as well? 

PS: We’ve got four on Karan’s team as well. But we’ve restructured the research team slightly, so I’m the strategic researcher who has oversight of all the different teams and plans the backlog. And then within that we’re growing our team of researchers. So we’ve got one junior researcher who then executes on a lot of that work. We used to have one researcher per domain. 

Democratizing research to designers frees time to engage with product management

MM: So it’ll be two of you with about nine to 10 product managers. And then how many designers loop into this? Are you also doing research directly for the designers or does that all come through Kevin? 

PS: Following the merger between UserTesting and UserZoom, we are now supporting the democratization of research. Prior to the merger, I would conduct a lot of the evaluative research myself. Now that the designers are leading on their own research, they will run it and I’ll just support them and answer any questions they have along the way. Their backlog of research is managed at the design lead level. So their manager will help them in prioritizing their own research projects.

Researchers supervise all studies, even the ones they don’t run

MM: So you’re sort of more a supporting resource for them, like if they get stuck or something like that. Do you also review their research before they can launch it, or are they just free to go do what they need to unless they get stuck? 

PS: We’ve got an intake form, which all of our research goes through. So whether a designer wants to conduct it themselves or it’s something that the research team would conduct, everything goes through our intake form. This allows me to track everything, because even though I don’t dedicate as much of my time to supporting, it all adds up. So I’ve got to be aware of it. 

Following this I’ll triage the request, catch up with the designer, and then if I think it’s something that needs to be done, then I’ll give them the go-ahead. Often we’ll find that we have some past research that answers their question, or we know enough to make a decision without research, so really, the intake form saves us all a lot of time and effort.

Once a project is triaged and we’re happy it should go ahead, different designers need a different level of support conducting the research. Some of them we co-create the research plan together. Others are very confident and just need me to check it. Either way, I’ll always check it before the research goes out to customers, just as a QA check. 

Time allocation between supervising research and conducting it

MM: Out of your day, what percent of your time ends up being actual research you are doing versus time you spend supervising these other people that you’ve democratized out to?

PS: It’s a bit crazy at the moment. It’s probably like 60-70% supporting others, and that’s just because at least half of the designers hadn’t done research before the merger. I hope for that to reduce quite significantly over time. But yeah, it’s probably more than I’d want at the moment.

MM: That’s a pretty common theme: people go into democratization naively assuming that, oh good, I’m just gonna hand off all of the evaluative work and then I can go do generative strategic studies and that’s gonna be my full-time job. And then it’s like, wait, I’m being pulled back into supporting all of this stuff. So it’s good to hear the reality of it. 

What would you like that percentage to end up being? 

PS: About 30%, ideally. What I’d love is to get to the stage where I’d literally just check the study before it goes out, just to make sure that there are no issues. We do that within the research team anyway. We check each other’s work. So it would just be like that. And then I’d just be there for any quick questions, but not feel like I am actually building it with them. 

And it is already reducing. Within almost a year it’s reduced quite significantly. It used to be all my time at first. So I’m very happy with how everyone’s doing. 

Democratizing is a multiyear process

MM: I think that’s a good thing to get across to people. Number one, it sounds like this is a multi-year process. And number two, it can get better over time. It’s not that you’ve got an infinite support burden forever, it’s a transitional thing. 

PS: One challenge we’re having now is not getting people to think about research; it’s quite the opposite. Everyone wants to do all the research and test everything. And actually I’m trying to educate people on the fact that, sometimes you have enough confidence in your design, it’s low risk and you have all this expertise and knowledge behind you, so you don’t always need to back everything up with research. It’s a good problem to have.

KL: Something I really love about working with Poppy is we discuss that. Do we actually need to do this research or do we have enough information where we’re confident? What’s the risk? 

I think that’s been really great. We seem to be always pretty aligned on that. And then that really helps us focus on the things that’ll have a bigger impact. 

How to choose between perfect and good enough

MM: Let me dig in a little bit on something you just said. You were talking about, okay, it’s not perfect, but it’s good enough. When I talk with researchers, it feels to me like they really want the research to be perfect. Like, this is why I went and got a master’s degree, or this is why I got a PhD, is to do the greatest outstanding research. So how do you get comfortable with research that is not perfect, but is good enough? How do you know where that line is, and how do you get to the point where that’s not nagging at you constantly in the back of your mind that you’re not doing stuff right?

PS: Yeah, <laugh> it’s tricky and I’m a massive perfectionist, so I’m probably not the best person to ask, but I think for me, support and trust from the team really helps. I’ve been in companies before that have kind of fed into that perfectionism by nitpicking the research and being like, “why did you do that?” Or, “you asked that in quite a leading way”, which in turn perpetuates the perfectionism because you feel like you have to cover all bases. And actually, I think I’m quite lucky at UserTesting that it doesn’t have to be perfect, and I don’t have to go into a huge amount of detail. I don’t need to explain my approach in massive depth because people trust that I’ve taken the right approach. 

KL: After 12 to 15 years in product management, I think you just get comfortable with ambiguity and imperfect information and having to drive decision making. That’s just the job. I’m someone who doesn’t really believe you’ll ever have perfect information. But there was a really great analogy on one of our podcasts. I can’t remember who the guest was, but the analogy he made was to Wheel of Fortune. It’s like, how many of the letters do we need to turn around so we can be confident in our guess? And I think that analogy is perfect for software, because you’re always making bets and you’re like, how much information do we need to make a confident bet that we feel is low risk?

And sometimes it comes down to, you need less information for a lower risk bet. Are we just spending a couple sprints on this? Maybe we just need 50% or 75% confidence. Or are we spending six months or a year on a huge project? We’re going to want to spend a lot more time, turn a lot more letters around and be way more confident in our bet. 

MM: It sounds like a nice back and forth where Poppy, you’re the expert on gathering the information and understanding the quality of it and how to do it right. And Kevin, you’re the expert on how much info you need to make a particular decision. How critical is this decision? And you can kind of rely on each other if you set up the dialogue. Right?

KL: Correct. But I also rely on Poppy’s expertise and opinion to ensure we agree that we have enough information to make a decision. I may think we do, but I always like to make sure that she feels confident as well. 

Build an equal partnership

MM: Why doesn’t this partnership happen automatically at every company? What you’re describing sounds very logical to me and, gee, that’s the way I would want to work. Why do you think it doesn’t work at a lot of places? 

KL: I came from a company that didn’t have a dedicated research team. At my last company we had product designers who were a bit more senior and did some of the research, or, worse, I did it myself or PMs on my team did it. Or we relied on whatever anecdotal information we could get. And so I just didn’t have that luxury of having some dedicated resources. Then coming into UserTesting, especially after we merged and Poppy and I got aligned, I wasn’t used to working that way. And so I’ll be totally honest and admit that I probably wasn’t the greatest partner when Poppy and I first started working together. But we had open conversations and we improved some things and worked on that partnership. And I think it’s just been getting better over time.

We did an in-person onsite, maybe three or four months ago, where leadership across design, engineering, research, and product got together. We all got aligned on what we want to achieve, talked about some areas where we thought we were having friction or things weren’t working well. And I think since then, our partnership has really grown. 

It’s not easy because you need to work at it and maybe if, like myself, you didn’t have a background of having that kind of dedicated resource, you need to change a little bit of the way you work and be a little bit more open-minded. But I think it’s like anything, if you work at it and just try to constantly improve, then you can build those good habits over time. 

PS: I’d agree. Credit to Kevin, he’s been so open-minded. I gave feedback very early on and he took it on board and changed how we work. As a result, I feel like it became less of a one-way thing. I’ve had it in past companies where product managers won’t ask your opinion on what research needs to be done, but instead will request your help, only in the moment that they need you. 

What we have now is a lot more of a partnership. Kevin will bring research questions to the table in meetings and we’ll discuss them together. I now feel like I’m an equal partner in that leadership team, alongside design, engineering and product. 

MM: Kevin, how do you feel? You’ve now got a researcher who is a partner in making decisions, and you’ve also got designers who are empowered to go off and do their own research. I could picture a product manager feeling profoundly uncomfortable, like I’m losing control of the overall process because now I’ve got other people I’ve gotta work with. Who knows what crazy stuff they’re gonna come up with? Is this a potential recipe for disaster from a product manager perspective? 

KL: I really lean on that word partnership. I think it’s important to build that with research, design, and engineering. It’s not a one-way street. Ultimately the product manager needs to make the decision on the priorities and the problems we’re gonna solve. But if you involve product design and your engineering partners in that, get their input and get aligned, then they’re part of that process. And I think the same thing needs to happen once things get into more of the design aspect. And even in the engineering around the build, I think ultimately the product designer’s going to own the final call on the UX, but they should be involving their product and engineering partners along the way to get input and feedback and adjust.

I think if all the functions have their area of expertise and really own driving decisions in their area, but involve the others to create that two-way street, then what you described I don’t really worry about because I think that team is working together around a shared outcome and I think they’re in lockstep together. 

That’s just the way I try to build partnerships with other functions, and it’s worked well for me to this point. I think the one thing that was different for me was how to build a partnership with research. And to be honest, in a leadership role, I sort of feel like my partnership with research is closer than with design, because a lot of the design work happens more at the initiative level and less at the strategic planning level. We do some design conceptual explorations, but I just feel like the research to make some of the more strategic decisions and prioritization is much more closely aligned with the work that I do in my role.

In a leadership role, I feel like my partnership with research is closer than with design. A lot of the design work happens more at the initiative level. I feel like the research to make the more strategic decisions is much more closely aligned with the work that I do in my role.

MM: So it’s really the direction setting as opposed to the detailed tactical implementation of exactly how it’s getting done. It’s the where and the why rather than the how, if I’m understanding right?

PS: I guess the difference with the PMs that sit under Kevin is that they are a lot more involved in that end product. So imagine Kevin said something is the priority, let’s work on that. Then I’ll go to the PM in that team and the designer and we’ll have a meeting to discuss it, and work through the research plan together. They’ll give me feedback, and off the back of the research, we’ll all align on the next steps as a triad, and the designer would then probably design something based off of that research. It’s all very collaborative. 

So I guess it does feed up to Kevin in the sense that his PMs then have confidence in the design, where it came from, and the rationale behind it all, which probably helps. 

MM: By the way, do you democratize to PMs as well or is it pretty much just designers? 

PS: Yes, the PMs are allowed to do research. I don’t think we have any currently doing it in my product areas, but in other areas some have done research before or came from a research background, so they’re the exception. 

How to develop the PM-research partnership

MM: Let’s talk about developing that relationship. Suppose we’ve got some people who love the partnership story that you just told — could be a PM, could be a researcher, could be a designer — and they want to start working in this direction. Do you have any advice for them on how to get started, problems to watch out for, any of those sorts of things? 

KL: I think the most important thing is to get aligned on what your objectives are and what you’re trying to achieve. It could start with a really small project and getting aligned on it. And then I think when you’re aligned on what you’re trying to achieve, then it becomes much easier to work together on how we are going to do that.

I think back to that offsite where I really feel our leadership team got much better aligned and became better partners, because we all came together as a new leadership team after the merger. So it was almost like we all joined a new company at the same time. So we had a safe space to talk about what’s working and what’s not. And then we really focused on improving some of those things that weren’t working. And I think just getting together in person helps, as well as building some trust and getting to know each other. 

I think the human and people side of things can’t be overstated when you’re building a partnership. You really need to be honest with each other and you need to trust each other and also challenge each other. It takes time to build that trust, but I think over time you can do it. And so I think you just need to work at it.

PS: I’d agree. I think trust and being a partner is really important. I also think just having a regular meeting in the diary — it sounds quite simple, but Kevin and I, once every two weeks we have a checkpoint where we go through the research backlog, check what the priority is, and discuss any new research requests. 

KL: At one point, I asked Poppy if we should just roll that meeting into one of our leadership meetings and she said, no, it’s really great to have that one-on-one conversation to make sure we’re aligned on priorities and go through the research backlog. And I think that meeting has really helped us stay aligned and have that two-way conversation around the things we’re thinking about, like where do we think we have enough information? Where is there a lot of risk or low confidence? Anything new that’s come up? I think that meeting has really helped us stay in sync, but also build the partnership slowly over time. 

MM: Do you have any specific examples of things that either were problems you really learned from or things that were particularly successful that you’d love to share? Help bring this to life with some specifics for people.

PS: I’ve got one example. Based on the merger and the plan to have a consolidated platform, we were deciding what would be the best base platform for our Live Conversation, the moderated test offering. There was one camp who thought it should be the UserTesting platform, one camp who thought it would be UserZoom. It was actually blocking us and we couldn’t come to a decision. 

Kevin and I discussed it and decided to run some research to understand what the customer perspective was. So of all the features that are deltas in one or the other, what are the priorities? So we ran a survey where we categorized things into must-haves, should-haves, nice-to-haves, and not needed, and ranked them accordingly. Within a week we got a list of priorities for the features. And based off of that, it was clear that the top five features, which were all must-haves, all sat within the UserZoom platform. It made that decision so much easier because we knew that would be the best base platform for our customers.

KL: Yeah, that was a really good example where we essentially put the time constraint of one week to do some technical and design exploration because we also wanted to understand the constraints around this decision. And one week to pull together some research to understand what our customers would value the most from each solution. And credit to the team, they put together all of that into a single document: the research, the technical considerations, and some UX and design considerations. And it was really clear based on all the information what the decision should be. It helped everyone put their opinions aside. 
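
A survey like the one Poppy describes can be ranked with a simple weighted tally. The sketch below is purely illustrative: the feature names, responses, and scoring weights are hypothetical assumptions for this example, not the team’s actual data or method.

```python
# Hypothetical survey responses: each respondent rates each feature
# as "must-have", "should-have", "nice-to-have", or "not needed".
responses = {
    "screen sharing": ["must-have", "must-have", "should-have"],
    "live chat notes": ["nice-to-have", "should-have", "not needed"],
    "session recording": ["must-have", "must-have", "must-have"],
}

# Simple weighting scheme (an assumption, not necessarily the team's).
weights = {"must-have": 3, "should-have": 2, "nice-to-have": 1, "not needed": 0}

def rank_features(responses):
    """Return (feature, total score) pairs sorted highest-priority first."""
    scores = {
        feature: sum(weights[rating] for rating in ratings)
        for feature, ratings in responses.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for feature, score in rank_features(responses):
    print(f"{feature}: {score}")
```

The output is an ordered list of features by customer priority, which is the shape of evidence the team used to compare the two platforms.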

MM: If you hadn’t had the research, what do you think might have happened in that conversation? Would you have reached the same place but it would’ve taken longer? 

KL: I think we probably would’ve made a decision based on the opinions of the biggest job titles, in whatever direction they wanted us to go. And that could have ended up being a really costly mistake.

But kudos to the team. We got all the information we needed in one week and then everyone felt really confident. And now anytime anyone comes in and sort of challenges why we made this decision, we have a Confluence page that has the technical exploration, all of the research that was done, and the rationale as to why we made the decision. And so instead of having to book a meeting and go through it all again, we send them to the page. And it’s been really helpful to get everyone aligned. 

MM: What about struggles? What have been the hardest things to deal with in all of this? And how have you dealt with them? 

PS: It seems like an obvious one, but one of them is time zones. I really want to be an equal voice in the room, but most people in the leadership team are US-based. Kevin, for example, is in the same time zone as the engineering lead, so they speak a lot. At first I was feeling quite removed from where the decisions were being made, but it was tricky because I understood where it was coming from. I just wasn’t awake at the time of their meeting. 

I think the turning point was that leadership offsite that Kevin mentioned, because we were all in a room and I felt like we all had an equal voice. And now that we’ve done that, we have a regular time in the diary to connect, and I think if there was a big decision, they would wait until me and Eli, who are in Europe, were awake. 

Research versus the loudest voice in the room

KL: The thing I love about the information we get back from the research team is that it lets us prioritize with evidence. Like in Slack, you’ll start to hear a lot of noise about some issue — for example, virtual focus groups and focus labs. I’ve been hearing this come up a lot in Slack and you start to think as a product manager, it sounds like this is becoming a big problem for our customers. Should we invest in this now? 

And then Poppy did some research comparing it to other problems we could solve, and it landed on the bottom of the list in terms of needs and value to customers. So it’s a really good example of how the loud minority can make it feel like a big problem. 

At my past product management job, for sure we would’ve probably put that on the roadmap and built some sort of MVP to solve for it. But the research very clearly showed that we shouldn’t focus on that before we focus on these other things, because our customers just don’t value it as much.

At my past product management job, we would’ve probably put that on the roadmap and built some sort of MVP to solve for it. But the research very clearly showed that our customers just don’t value it as much.

MM: It reminds me of early on, when I joined UserTesting, I talked with a few product people who were using it to do this kind of work. And there was a typical quote I would get from them: “I thought I really knew our customers, and then I did some tests and I found out I was so wrong.” It’s almost like they had a religious awakening. So your example really resonates with me. 

PS: I remember at previous companies, people would say “we need to do this because this client said it,” or “we need to do it because we’ve heard that over and over again.” I now ask people, is there any research to support that? Because you end up getting blurred by the loudest voice in the room.

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.