Summary

Many observers predict that generative AI will transform healthcare, but there is a lot of disagreement about exactly what will happen and why. For example, Boston Consulting Group focuses on immediate actions and impacts, while the Kellogg School of Management discusses longer-term trends and the role of public opinion. Many are asking, “Will AI eventually replace doctors?” We think an even more compelling question is, “Is this even what patients want?”

That’s the question we asked in a large study of consumer attitudes toward AI in healthcare. We found that while there is significant interest in using AI technologies like ChatGPT as a proxy for doctors, there is also considerable apprehension (our research indicates that only 7% of the US online population has ever used generative AI to answer a health-related question). Those worries must be addressed before most people will be comfortable involving AI in their healthcare. The most important worries are:

  • How will healthcare companies keep personal health information private and secure?
  • Is there scientific evidence that AI-driven advice is reliable?
  • And, should the need arise, can users quickly switch from conversing with a machine to talking with a human healthcare professional?

The generative AI leaders are targeting healthcare

The public release of ChatGPT in November 2022 set off a global phenomenon of generative AI usage and adoption. It took only five days for ChatGPT to reach over one million users, and by January 2023 the platform was estimated to have reached over 100 million users, making it the fastest-growing platform in history at that time.

Adoption is moving at a similar pace at the enterprise level: a recent Gartner poll indicates that 70% of organizations are currently exploring how to implement generative AI in their business functions, and 19% are already in a pilot or production stage.

Sam Altman, CEO of OpenAI (the company that operates ChatGPT), enthusiastically describes the potential for generative AI systems to greatly expand our scientific knowledge, and specifically calls out their ability to help cure human disease. In a message on Twitter (now known as X), he praised companies leveraging ChatGPT to expand access to health information for patients who could not otherwise afford it. Many share this view, identifying healthcare as one of the top industries expected to be disrupted by the widespread adoption of generative AI.

Kevin Scott, the Chief Technology Officer at Microsoft, echoed this optimism. In a recent podcast interview, he told an anecdote about his immunocompromised brother, who lives in rural central Virginia with limited access to quality medical care. Scott imagined a scenario in which patients like his brother could get better medical information from generative AI than from a local doctor. And a recent study suggests that ChatGPT is already as good as or better than a doctor at diagnosing patients.

But not everyone is optimistic about AI in medicine. Gary Marcus, professor emeritus of psychology and neuroscience at NYU and a long-respected authority on AI, noted that AI chatbots have been known to provide false and inaccurate information: “We know these systems hallucinate, they make stuff up. There’s a big question in my mind about what people will do with the chat style searches and whether they can restrict them so that they don’t give a lot of bad advice.”

Dr. Nathan Price, Chief Science Officer of Thorne HealthTech and co-author of The Age of Scientific Wellness, shared an alarming story in which an AI system was asked for a solution to prevent cardiovascular disease. The system confidently recommended giving everyone carcinogens so that, instead of dying of heart disease, they would all die of cancer.

How do patients feel about generative AI in healthcare?

Because there is so much disagreement, we wanted to ask actual patients what they think. We surveyed US consumers and followed up with video self-interviews (see the research methodology at the end of this article). We found that most people are not yet ready to put their faith in the medical recommendations of AI chatbots.

As a baseline, we asked participants how often they used online information sources when they had questions about their health. Most people with Internet access have used it to look up health information.

Among people who have used the Internet for health information, search engines and healthcare websites like WebMD are the most frequently used online resources:

A chart shows that web search is the most often used online source for health information, followed by healthcare websites, email or text to a doctor, and the website of their doctor. Least commonly used were social media sites and YouTube.

We then asked what specific types of information they research online. 82% look up information on symptoms, and 75% learn more about a medical condition that they or someone they care about has been diagnosed with:

A chart shows that the information most commonly looked up online is, in order: looking up info on symptoms, learning more about a medical condition, double-checking something a doctor told them, and choosing a doctor. Least common is shopping for medical insurance.

We then dug into the emerging role of generative AI. Our first finding was that despite all of the buzz, usage of ChatGPT and services like it is still relatively low. Only 16% of people with Internet access have ever used ChatGPT or a similar system. Of that 16%, a little under half have used it for healthcare questions (7% of the online population overall).

Among the small number of people who have used ChatGPT for health questions, 87% were at least somewhat satisfied with the information they received, a very good score.

In online self-interviews, those ChatGPT users said they used it to seek potential causes for symptoms they were experiencing, to explore treatments for existing health conditions, and to research side effects of recent surgeries and of medications they were taking. They told us they liked how easy it was to quickly get direct answers rather than having to wade through the typical deluge of information returned by Google searches or sites like WebMD:

People who use ChatGPT for personal health explain how and why they use it.

Trust is central to generative AI adoption in healthcare

We explored the issue of trust at length in our research. We asked participants to indicate how comfortable they would be using an AI chatbot in a number of different healthcare scenarios. None of the scenarios scored particularly high, but getting medication reminders and scheduling appointments scored the least badly.

The activity that scored worst, by a substantial margin, was using ChatGPT to discuss mental health-related questions. More than half of our participants indicated a low level of comfort with the idea of using an AI chat system to discuss mental health issues:

A chart shows comfort levels around the use of ChatGPT in various healthcare activities. Comfort was least bad for medication reminders and scheduling appointments. Discomfort was highest for discussing mental health and managing chronic diseases.

Rating scale: 1=not at all comfortable, 6=extremely comfortable.

We asked people to explain why they felt so uneasy about using an AI chatbot to discuss mental health-related questions. They told us that trust, compassion, and empathy-building with a health professional are crucial components of successful mental health treatment. It’s hard to imagine a patient building an empathy-based relationship with a chatbot. And as we know from a notorious example, empathy is something AI chatbots still have quite a bit of difficulty with.

Why many people are uncomfortable using generative AI for mental health care

These are pivotal insights for the healthcare and healthtech industries, considering that mental health is often identified as a promising use case for AI chatbots. Quite a few dedicated mental health chatbots are already on the market today. This may be a challenging business model if most patients still strongly prefer to get mental health treatment from an actual human.

But the mental health concern was just an extreme example of the uneasiness many people feel about generative AI in healthcare. We asked participants about a wide range of potential concerns, and many people rated all of them as significant: fear of errors, unreliable information, loss of privacy, and even uncertainty about whether a patient could input the right question prompts or would be correctly understood by the AI chatbot.

Chart rating concern levels over a variety of potential healthcare AI problems. Everything is rated as a moderate to very great concern, including fear of errors, fear of misunderstandings, and concerns that information may not be reliable.

For anyone creating and designing experiences that let patients use an AI chatbot in place of healthcare professionals, it is important to take note of these concerns. Roughly one third of the participants in our study expressed the highest level of concern for each scenario of using an AI chatbot in a personal health setting. In our interviews, some participants even expressed fear of what they described as a dystopian future in which only the wealthiest among us would have access to qualified humans when seeking medical assistance.

Information accuracy was another major issue identified by the participants we talked to. Many individuals in our study told us that they were not sure whether guidance from an AI chatbot could be trusted. They pointed out that many chatbots still do not cite credible sources for the information they return from user prompts. Some participants told us that they weren’t confident they would even know how to craft an effective prompt to get useful health guidance.

General concerns about using generative AI in healthcare

How to increase confidence in generative AI

While much of the feedback we received centered on uncertainty and skepticism, some people did express excitement and optimism that generative AI could enhance the patient experience, or at least transform the worst parts of it.

Participants expressed optimism about ChatGPT’s potential to serve as an alternative to medical professionals, particularly for patients lacking healthcare access. They also conveyed dissatisfaction with the inefficiencies of healthcare administrative staff and a waning trust in the expertise of health professionals. Many believed that an AI chatbot could provide quicker and more dependable healthcare services, viewing technology like ChatGPT as a viable option for addressing these issues.

People describe how generative AI might improve healthcare

We gave people a list of changes that might make them more comfortable using an AI chatbot in healthcare. Everything rated highly. The highest score went to the ability to escalate a conversation to a human healthcare professional if needed. Participants also indicated that scientific evidence of the accuracy of the information presented would help, as would certainty that their privacy would be protected.

A chart listing the things that would make people more comfortable with ChatGPT in healthcare. Everything helps, but the top responses are the ability to escalate to a human, privacy protection, and scientific evidence of reliability.

In follow-up interviews, people gave more detail on why they’re worried about privacy. There are significant fears that health data entered into a chatbot may be misused in the future, for example to determine eligibility for insurance and care. For anyone designing experiences that leverage generative AI in a healthcare setting, it is vital to build robust privacy protections and to communicate them clearly to users.

People describe what would make them more open to AI in healthcare

Conclusion: To drive adoption, create trust

Trust remains both the biggest barrier to and the biggest opportunity for the adoption of generative AI in healthcare. There are hidden landmines throughout the intersection of AI and personal healthcare. If you are thinking about how generative AI fits into your patient experience, there are four crucial considerations to design around:

  1. Show patients that their privacy and personal health information will always be protected. It is essential to demonstrate that interactions with generative AI adhere to HIPAA or the equivalent norms and regulations that safeguard personal health information. This is crucial to building trust.
  2. Wherever possible, show evidence of scientific rigor behind any medical advice or recommendation dispensed by AI chatbots. Patients want to know that the information they receive is accurate and backed by scientific research. The current experience commonly offered by AI chatbots is not enough to gain their trust and confidence.
  3. Make it easy for patients to reach an actual person, even when their experience is guided by a generative AI chatbot. For most healthcare scenarios, patients want to know that they can escalate to a human healthcare professional if they need to (see the sketch after this list).
  4. Keep your finger on the pulse of patients’ needs and sentiments. With AI technology changing rapidly, attitudes toward it are also evolving quickly. Don’t assume that today’s answers will still hold tomorrow. Talk to your customers and your stakeholders on a regular basis.
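To make the escalation point concrete, here is a minimal, hypothetical Python sketch of a chat loop that checks each incoming message for human-handoff intent before the chatbot is allowed to respond. Every name in it (wants_human, hand_off_to_human, generate_ai_reply) is an illustrative placeholder rather than any real product’s API, and a production system would use a trained intent classifier instead of keyword matching:

    # Hypothetical sketch: route a patient to a human when they ask for one.
    ESCALATION_PHRASES = ("talk to a person", "real doctor", "speak to someone", "nurse")

    def wants_human(message: str) -> bool:
        # Very rough intent check; production systems would use a classifier.
        text = message.lower()
        return any(phrase in text for phrase in ESCALATION_PHRASES)

    def hand_off_to_human(message: str) -> str:
        # Placeholder: in practice, this would put the patient in a staffed queue.
        return "Connecting you with a healthcare professional now."

    def generate_ai_reply(message: str) -> str:
        # Placeholder standing in for a call to a generative AI model.
        return "AI response to: " + message

    def handle_message(message: str) -> str:
        # Check for escalation intent before the chatbot answers.
        if wants_human(message):
            return hand_off_to_human(message)
        return generate_ai_reply(message)

    print(handle_message("I'd rather talk to a real doctor about this."))

The design point is that the handoff check runs before, not after, the AI generates an answer, so a patient who asks for a person never has to argue with the chatbot first.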

Research methodology: In summer 2023 we surveyed 1,055 randomly chosen US adults using the UserZoom platform. The survey was followed up with online self-interviews via the UserTesting platform to explore the reasons behind the survey findings. The interview participants gave permission to use their video comments in this report.
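For context on the precision of the percentages reported above, a sample of 1,055 implies a maximum margin of error of roughly ±3 percentage points at a 95% confidence level, assuming the sample behaves like a simple random sample (a simplifying assumption on our part):

    margin of error = 1.96 × √(p(1 − p) / n) = 1.96 × √(0.5 × 0.5 / 1,055) ≈ 0.030

Design effects from recruiting and weighting could widen this somewhat.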

Image created by DALL-E in the style of Rembrandt’s The Anatomy Lesson of Dr. Nicolaes Tulp

The opinions expressed in this publication are those of the authors. They do not necessarily reflect the opinions or views of UserTesting or its affiliates.