Why You Should Never Trust AI Chatbots for Medical Advice

ChatGPT and similar tools can offer quick information, but when it comes to your health, the risks may outweigh the convenience.

It’s tempting to turn to AI for quick answers when you're feeling unwell or confused by a medical term. But a troubling real-life case shows that using AI chatbots for health advice may do far more harm than good: a 60-year-old man swapped his table salt for sodium bromide after consulting ChatGPT, only to develop bromide toxicity and spend three weeks in a psychiatric hospital.

While AI tools like ChatGPT can seem helpful and even convincing, they’re not equipped to guide your health decisions safely or accurately. Here's why experts urge caution and how to use AI tools more wisely when it comes to your well-being.

Why AI Chatbots Fall Short on Personalized Health Advice

Unlike your physician, AI tools don’t know you, your health history, or your medications. That means they can’t account for underlying conditions, allergies, or the nuance behind your symptoms. According to a 2024 Pew Research survey, nearly 58% of U.S. adults believe AI-generated health content is “somewhat reliable,” but that belief may be dangerously misplaced.

“AI chatbots provide very generic answers,” explains Margaret Lozovatsky, MD, vice president of digital health innovations at the American Medical Association. “They’re not a substitute for medical expertise or personalized care.”

Even well-meaning searches can lead to harmful outcomes when AI provides guidance without context or clinical judgment.

AI Answers Can Be Outdated or Flat-Out Wrong

Generative AI pulls from past training data, some of which may be outdated, incomplete, or factually incorrect. For instance, recent CDC guidance recommends that everyone 6 months and older receive the updated flu shot, but AI systems might not reflect that change immediately.

What’s more, these tools are designed to sound confident even when they’re wrong, which can make misinformation harder to recognize. A 2023 study in the journal Nutrients found that while popular chatbots could generate weight-loss meal plans, most of those plans failed to meet basic macronutrient targets for health and sustainability.

So, even if a chatbot gives you an answer that sounds right, it might not be accurate or safe.

Privacy Risks Are Another Hidden Danger

AI chatbots aren’t governed by medical privacy laws like HIPAA. This means that any personal health information you enter could potentially be stored, shared, or used in ways you didn’t consent to.

“Never input private health data into a generative AI tool,” warns Ainsley MacLean, MD, a health AI consultant. “There’s no guarantee that information won’t be exposed.”

If you’re using AI to gather background knowledge, stick to general searches. Never include names, medications, test results, or other identifiable health information.

How to Use AI for Health Without Putting Yourself at Risk

While AI isn’t a replacement for your doctor, it can help you feel more informed going into a medical appointment. Use it to learn the basics about a condition or to look up definitions for medical jargon, then bring that information to your provider.

When reviewing AI-sourced summaries:

  • Check if the information comes from a reputable source, like the Mayo Clinic or a peer-reviewed medical journal.

  • Verify the date to ensure the guidance is current.

  • Share anything you find with your doctor before making changes to your routine.

And always ask: Is this accurate? Does this apply to me?

Bottom Line

While generative AI tools can be powerful, they’re not yet ready to serve as your virtual doctor. Relying on them for anything beyond general education can lead to misinformation, privacy risks, and, in extreme cases, serious health consequences. The safest approach? Use AI to supplement, not replace, your relationship with a trusted healthcare provider.

If you found this helpful, share it with someone you care about, or subscribe to our newsletter for more.