As brands race to prepare for an agentic AI future, healthcare should be careful not to confuse rising usage with rising trust.

That distinction matters right now. A recent Harvard Business Review article argued that brands need to adapt to a world in which AI agents increasingly shape how consumers research and buy. In many categories, that shift may feel inevitable. In healthcare, it looks more complicated.

A new KFF poll found that 32% of U.S. adults have used AI chatbots for physical or mental health advice in the past year. Most of those users said speed was a major reason: 65% wanted quick or immediate information, while others used AI before deciding whether to see a provider or because they felt more comfortable looking up health questions privately.

So yes, consumers are using AI in healthcare. But that does not mean they are comfortable letting it take over.

KFF’s findings suggest something more nuanced. Even as people turn to AI for health advice, 77% of adults say they are concerned about the privacy of personal medical information provided to AI tools. And while many users report being somewhat satisfied with AI responses, only a minority say they are very satisfied. Earlier data from Salesforce points in the same direction: 69% of U.S. adults said they were uncomfortable with healthcare companies using AI to diagnose them, even though more than half were comfortable with AI for nonclinical tasks such as scheduling appointments and estimating medical expenses.

That split is the real story.

Consumers appear open to AI when it helps with tasks that feel:

  • Administrative
  • Objective
  • Low risk

They become more hesitant when AI moves closer to:

  • Judgment
  • Diagnosis
  • High-stakes decisions

Academic research helps explain why. Findings published in the Journal of Consumer Research showed that consumers are reluctant to use healthcare provided by AI in part because they believe AI is less able than human providers to account for their unique characteristics and circumstances. Separate research published in the Journal of Marketing Research on “task-dependent algorithm aversion” found that people trust and rely on algorithms less when tasks seem subjective rather than objective.

Healthcare is full of decisions that feel consequential, contextual and personal. That makes it a natural domain of resistance.

There is another wrinkle here. KFF found that among adults who used AI for physical health advice, 42% did not follow up with a doctor or provider afterward. Among those who used AI for mental health advice, a majority did not follow up with a mental health professional. That does not just suggest experimentation. It suggests AI is already shaping behavior in ways healthcare brands and providers cannot afford to ignore.

The implication for healthcare marketers is not to avoid AI. It is to be more deliberate about where AI belongs.

The strongest opportunities may be where AI reduces friction without trying to replace human judgment: helping people understand symptoms in plain language, prepare for appointments, navigate benefits, estimate costs or find the right next step faster. The bigger risk is assuming that because people use AI, they want it to make consequential decisions on their behalf.

Healthcare consumers may be ready for AI as a tool. They are not ready for AI as a decision-maker.

For healthcare brands pushing toward agentic experiences, that is an important line to respect.

About the Author

Michele Loeper
Lead Strategist

Michele helps healthcare brands exceed their goals by conceptualizing and implementing multichannel marketing communications strategies. She brings a solid understanding of business and marketing for B2B and B2C healthcare organizations, including those operating in highly complex regulatory environments.