Summary

ChatGPT is a very promising research assistant, with emphasis on the word “assistant.” It needs to be asked the right questions, and its responses can’t always be trusted. A well-informed human can use it to automate some time-intensive research tasks, such as analyzing and summarizing unstructured results. But it’s not a replacement for a skilled researcher today. Here’s why…

User researchers have been testing ChatGPT

One of the most interesting new products to appear in tech recently is ChatGPT, the “generative AI” system that can create convincing written reports and conversations. The most important question to ask about any powerful new technology is whether it can be used to solve real customer problems. Until that question is answered, the technology is likely to be stranded in limbo, with a lot of hype but little adoption (look at VR goggles if you need an example).

People working in human insight have been asking how ChatGPT and products like it will affect user research. For example,

  • Designer and engineer Shavin Peiries used ChatGPT to help analyze and organize quotes from interviews, turn them into problem statements, and brainstorm some ideas for marketing campaigns (link). He showed the exact prompts he used, and shared what he learned about how specifically those prompts must be worded. 
  • Dovetail created an interesting philosophical video speculating about the subject (spoiler alert: their conclusion is that it might be good or bad, but “it’s coming whether or not you like it”). 
  • From a more concrete perspective, two UserZoom employees have published their experiments and impressions:
    • Lawton Pybus, a UX research manager at UserZoom, attempted to use ChatGPT for several tasks related to creating a research study (article here). He described what he tried and graded ChatGPT on each task, ranging from creating the study outline (grade: A-) to creating the screener (grade: F). He found some great successes, but also some striking problems, including a new research question type that ChatGPT apparently fabricated (ChatGPT’s unpredictable tendency to make things up is a consistent theme in many reviews).
    • Kuldeep Kelkar, UserZoom’s SVP of User Research Services, took a broader view of how ChatGPT, or a product like it, might be used in the overall research process. Like Lawton, he found some pluses and minuses. He posted his thoughts on LinkedIn. With his permission, we’ve reproduced his LinkedIn post below, with some added comments.

Kuldeep and Lawton’s articles are a good start at understanding what generative AI can do for human insight, but we’re all just getting started. Used correctly, generative AI looks like a powerful assistant and productivity boost for knowledge workers, rather than a replacement. But it can also backfire if it isn’t used thoughtfully, with carefully constructed prompts, sanitized data, and vetted outputs. As with other powerful tools, everything depends on how it’s used. We all need to experiment and share insights as we explore the potential of this very intriguing technology. Please share your ideas and experiences in the comments section below.

Meanwhile, here is Kuldeep’s post:

Five ways user researchers can use ChatGPT today

By Kuldeep Kelkar

Concrete examples and use cases for how user researchers can benefit from ChatGPT:

  • Create a research plan and script (to build a research study)
  • Draft a screener
  • Summarize findings (during analysis and reporting)
  • Brainstorm ideas or approaches
  • Develop a list of personas, or double-check that you have accounted for all end-user personas

So what is ChatGPT? (in its own words)

ChatGPT is an AI-powered language model developed by OpenAI. It is a type of machine learning model that uses deep neural networks to generate human-like responses to natural language inputs.

ChatGPT is trained on a massive amount of text data from the internet, which enables it to generate responses to a wide range of questions and prompts in a human-like manner. It can answer questions, provide explanations, generate creative writing, and more.

ChatGPT has been designed to provide a conversational interface for users, allowing them to interact with the model through text inputs and receive responses in real-time. It is commonly used in applications such as chatbots, virtual assistants, and customer service tools, where the goal is to provide users with quick, accurate, and human-like responses to their questions and needs.

Disclaimer: The answer above was generated by ChatGPT 🙂

How ChatGPT performed on the tasks 

Task: Create a research plan and script (to build a research study)

I asked ChatGPT: 

How do I build a research study in UserZoom to understand patients’ information needs and pain points when searching for information on a specific medical service (I.e. flu vaccine, COVID-19 care)?

ChatGPT’s reply:

Here are the steps to build a research study in UserZoom to understand patients’ information needs and pain points when searching for information on a specific medical service:

  1. Define your target audience: Determine the demographic criteria of the participants you want to include in your study, such as age, location, and health status.
  2. Create a new study: Within UserZoom, select the option to create a new study and give it a name.
  3. Add demographic questions: Add questions to the study that will help you determine if a participant meets the target audience criteria you have defined.
  4. Add task-based questions: Add questions that will ask participants to perform specific tasks related to searching for information on the medical service of interest, such as navigating to the relevant website, searching for specific information, or comparing different resources.
  5. Add open-ended questions: Add open-ended questions that will allow participants to describe their experiences and challenges when searching for information on the medical service of interest.
  6. Preview and launch the study: Preview your study in UserZoom to make sure everything is set up correctly and then launch the study to start collecting data from your participants.
  7. Analyze the results: After your study is complete, use UserZoom’s analytics tools to analyze the data you’ve collected. Look for patterns and insights in the data that can help you understand patients’ information needs and pain points when searching for information on the medical service of interest.
  8. Use the results to inform design: Use the results of your study to inform the design of your website or other resources to better meet the information needs and pain points of patients searching for information on the medical service of interest.

Comments

Mike: ChatGPT is very literal-minded. We asked it “how do I build a research study,” and it gave a competent outline of the steps needed to create a study, plus some unsolicited info on analyzing and using the results. If we wanted ChatGPT to actually create the research questions, we would have needed to ask for that specifically. Lawton’s article went into that in several ways, and he gave ChatGPT grades in the B to C range.

Task: Draft a screener (or double-check whether the screener you are using is missing anything)

I asked ChatGPT: 

Can you help with screener questions (actual questions and options) for high net worth investors for an upcoming usability test on UserZoom?

ChatGPT’s reply:

Sure, here is an example of screener questions for high net worth investors for an upcoming usability test on UserZoom:

  • What is your current occupation?
    • a. Entrepreneur/Business Owner
    • b. Executive/Manager
    • c. Professional (e.g. doctor, lawyer, engineer)
    • d. Other (please specify)
  • What is your current annual household income?
    • a. Less than $75,000
    • b. $75,000-$149,999
    • c. $150,000-$499,999
    • d. $500,000 or more
  • Approximately how much money do you currently have invested in stocks, bonds, and other investments?
    • a. Less than $100,000
    • b. $100,000-$499,999
    • c. $500,000-$1,999,999
    • d. $2 million or more
  • Are you currently working with a financial advisor or investment manager?
    • a. Yes
    • b. No
  • Have you invested in any alternative investments such as real estate, private equity, or hedge funds?
    • a. Yes
    • b. No
  • Have you made any investment decisions online in the past six months?
    • a. Yes
    • b. No
  • Are you interested in learning more about new investment opportunities?
    • a. Yes
    • b. No

These screener questions can be customized to fit your specific needs and requirements. Make sure to test the screener questions with a small group of participants to ensure that they are accurate and effective in screening for high net worth investors.

Comments

Kuldeep: A decent screener, pretty impressive. But it asked easy-to-cheat yes/no questions. Generally, good researchers will not ask yes/no questions; they keep the options more open-ended, with must-select/may-select answer choices.

Lawton: In the first question, it gave a very limited set of potential occupations. But I like the disclaimer it added at the end about testing the screener with a few participants.

Mike: Unfortunately, ChatGPT also didn’t recommend which question responses should result in a participant being selected or rejected.

Task: Summarize findings (during analysis and reporting)

The task I gave ChatGPT:

I took 150 written responses to a single question from a UserZoom research study, uploaded them to ChatGPT, and asked it to summarize them into an organized bullet list of key themes. The same can be done for other open-ended responses and for video transcripts.

The question asked in the study was: “Did you experience any of the following problems during this task?” Here are samples from the text comments we received:

  • couldnt do it without logging in first
  • Long list of doctors but no indication of their specialty
  • Had to go to an outside site
  • Cannot search by name
  • The search was odd in selecting what type of provider
  • I had to reset my password
  • was not available
  • I was not sure which network I was covered under – the choices did not match my card
  • unable to search for pcp
  • Tried twice, both times resulted in: No Results Found(I live in a small town)
  • No Dr. Smith w/in the first 10 pages
  • I think this survey had an error.  I got the message “A valid SubjectConfirmation was not found on this Response, Subscriber Id must be present”
  • no information that I could find on looking for doctors in my network
  • WOuldn’t take correct password AND correct security question answer
  • I was logged out during the search process
  • search I found was not by name only location and coverage
  • had to change location
  • I kept getting out of network doctors
  • wrong ID number according to the site
  • I needed to search in 2 states to find one close to me
  • extreme lag
  • Problems logging in, needed to get authentication code, etc.
  • It listed the specialty before the providers
  • Too many Smith
  • Couldn’t search by last name
  • not sure if he’s a primary care doctor
  • I just google it
  • Site doesn’t show the distance of the doctor from you which is essential
  • The search bar produced results too quickly. I had to hit “View All” instead of hitting “Enter” on my keyboard.
  • pop ups
  • because my employer is small and has the corporate office in SC, when you go on the website to search for your state, its not there until I actually log in.
  • Easy after I found the “find provider” link
  • this website never lets me get past login it always says it doesn’t reconize my log in info then i go by member I.D number it cannot identify me with that thn I try security question it still will not let me in this happens everytime i try to log in so I always end up calling them and telling them why I could not use the website then they take care of my issue over the phone not resolving when I could not log in

ChatGPT’s response:

Based on the responses, common issues experienced by users during the task include:

  • Difficulty logging in or accessing their account
  • Problems with searching for doctors (e.g. search results not accurate, unable to search by name or specialty)
  • Difficulty navigating the website and finding the right information
  • Slow website performance and technical difficulties (e.g. slow refresh, pop-ups, error messages)
  • Lengthy process and having to navigate multiple websites to find the information they need.

These insights suggest potential areas for improvement in the design and functionality of the website to better support users in finding the information they need.

Comments

Kuldeep: It did a pretty decent job. I asked ChatGPT to summarize additional groups of 150 responses, and the results were consistent.

Lawton: How common are these issues? It’s hard to prioritize from this list. Some of the issues are quite general and don’t point to any specific problem that a product team might use to inform changes. A quote or explanation would be helpful.

Regarding the comments themselves, I don’t see any sensitive information here, but it’s important to ask how these tools handle data and to consider the risk that proprietary company information or personally identifiable data might leak into the training set (more here).

Mike: I think this is where ChatGPT shines. It’s engineered to draw conclusions from a mass of unstructured Internet text, so it’s also good at drawing conclusions from unstructured text generated by a study.
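
The summarization workflow Kuldeep describes above can also be scripted against the OpenAI API instead of pasting text into the chat window. The sketch below is illustrative only, not a recommended implementation: the model name, batch size, and prompt wording are assumptions, and, per Lawton’s caution above, check your organization’s data-handling policies before sending study data to any third-party API.

```python
# Minimal sketch: batch open-ended study responses into a single prompt and ask
# a chat model for a themed summary. Assumes the `openai` Python package
# (0.x-style ChatCompletion API) and an OPENAI_API_KEY environment variable.
# Model name, batch size, and prompt wording are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_responses(responses, batch_size=150, model="gpt-3.5-turbo"):
    """Return one themed summary per batch of open-ended responses."""
    summaries = []
    for start in range(0, len(responses), batch_size):
        batch = responses[start:start + batch_size]
        # One response per line, numbered so the model can tell entries apart.
        # Large batches may need to shrink to fit the model's context window.
        numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(batch))
        prompt = (
            "Below are participant answers to the question: "
            "'Did you experience any of the following problems during this task?'\n\n"
            f"{numbered}\n\n"
            "Summarize these answers as an organized bullet list of key themes, "
            "ordered roughly by how often each theme appears."
        )
        result = openai.ChatCompletion.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a UX research assistant."},
                {"role": "user", "content": prompt},
            ],
            temperature=0,  # keep summaries as repeatable as possible across runs
        )
        summaries.append(result["choices"][0]["message"]["content"])
    return summaries

# Example usage (vet the output against the raw comments before reporting it):
# comments = load_comments_from_csv("study_responses.csv")  # hypothetical helper
# for summary in summarize_responses(comments):
#     print(summary)
```

As Lawton notes, the output is a starting point for analysis, not a finished finding: spot-check the themes against the raw comments and add counts or representative quotes before sharing them with a product team.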

Task: Brainstorm ideas or approaches

I asked ChatGPT:

How do I put together a mixed-method research plan for redesign of “find a doctor” webpages?

ChatGPT’s response:

A mixed-method research plan combines qualitative and quantitative research methods to provide a comprehensive understanding of the user experience. Here are the steps to put together a mixed-method research plan for the redesign of “Find a Doctor” webpages:

  • Define the research objectives: Clearly define the research objectives and questions that you aim to answer with your mixed-method research plan. For example, you may want to understand the current user experience with existing “Find a Doctor” webpages, identify key pain points and challenges, and gather insights into what users are looking for in a new design.
  • Conduct qualitative research: Qualitative research, such as user interviews, surveys, or usability testing, can provide in-depth insights into the user experience and help you understand the motivations, needs, and behaviors of your target users.
  • Conduct quantitative research: Quantitative research, such as online surveys or analytics data, can provide a more data-driven understanding of the user experience by measuring user behavior and engagement with the “Find a Doctor” webpages.
  • Analyze the data: Analyze the data collected from both qualitative and quantitative research methods to gain a comprehensive understanding of the user experience. Use tools such as statistical analysis or coding frameworks to process and interpret the data.
  • Synthesize the findings: Synthesize the findings from the qualitative and quantitative research methods to identify common themes, patterns, and insights into the user experience with “Find a Doctor” webpages.
  • Use the findings to inform the redesign: Use the findings from the mixed-method research plan to inform the redesign of the “Find a Doctor” webpages. Ensure that the new design meets the needs and expectations of the target users and addresses the pain points and challenges identified in the research.

By following these steps, you can develop a comprehensive mixed-method research plan that provides a well-rounded understanding of the user experience and supports the redesign of “Find a Doctor” webpages.

Comments

Lawton: Bravo for starting with the research objectives.

Mike: This is another example of ChatGPT’s literal-mindedness. Asked “how do I put together a research plan?” it gave a competent list of the steps needed to create a plan. What it didn’t do is actually make the plan. It also didn’t give any advice that’s specific to the medical market. I think that, at least for now, ChatGPT is going to be most effective as an assistant to someone who knows how to ask and interpret the right questions, rather than as a replacement for that person.

Task: Develop a list of personas (or double-check that you have accounted for all end-user personas)

I asked ChatGPT: 

We are redesigning a large business application for insurance agents who sell life insurance. What personas should we design for?

ChatGPT’s response:

When redesigning a large business application for insurance agents who sell life insurance, it’s important to design personas to represent the different types of users who will be using the application. Some of the personas you may want to consider are:

  • Novice Agent: An agent who is new to the insurance industry and may need more guidance and support in using the application.
  • Experienced Agent: An agent who has been selling insurance for several years and is familiar with the industry.
  • Sales Manager: A manager who oversees a team of insurance agents and needs to monitor their performance and productivity.
  • Underwriter: A specialist who assesses the risk associated with insurance policies and determines if they should be approved or rejected.
  • Customer Service Representative: A representative who assists agents with resolving any issues or problems they encounter while using the application.
  • IT Support: An IT specialist who provides technical support for the application and helps resolve any technical issues that arise.

By designing personas for these different user groups, you can ensure that the redesign of the application meets the needs of all stakeholders and provides a user-friendly and efficient experience for all users.

Comments

Lawton: UX folks generally base personas off of data, whether collected in a formal study or from analytics. Although none was provided in the prompt, it’s interesting that ChatGPT just started shooting from the hip rather than asking for some data. Granted, its improvising is impressive, and the summarize-findings task already demonstrated that it can do something useful with data. But this is further evidence that the outputs can only be as good as the inputs.

Mike: I was impressed by the amount of relevant detail in this response. However, Lawton’s point is a good one. I assumed ChatGPT was working off a reasonable base of general knowledge about the insurance business that it picked up from its training set. But since we don’t know what actual sources it used, it’s hard to trust the reliability of its conclusions. I think the word “trust” is going to come up over and over in future discussions about generative AI.

Kuldeep’s point of view on the future

The future is not written yet, either by humans or AI (that I know of). And yes, pun intended. 

I believe that AI-powered technology will surely play an assistive role and make humans more productive. Calculators did not take jobs away just because people no longer had to add and subtract manually.

AI-powered GPT will certainly have implications in the B2B world, where many tasks can be automated (though a skilled human will still have to act on the output). There are implications for support, chat, knowledge bases, education scripts, copywriting, marketing content, email responses, and more.

So my recommendation is to try ChatGPT and find use cases that will help in your profession (whether you are a designer, a researcher, or in any other role).

If you have tried ChatGPT, please comment with use cases that you found valuable.

As you may have guessed, the image that accompanies this article was created by the DALL-E 2 AI-driven image-generation system. We asked it to generate a painting in the Hudson Valley style of a robot and human climbing a mountain together.

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.