Study Warns of Inaccuracies in Chatbot-Provided Medication Information

Researchers warn of the risks of relying on AI-generated drug advice, citing frequent inaccuracies and answers too complex for many patients to understand.

  • Researchers from the University of Erlangen conducted a study on the reliability of AI chatbots for medication information.
  • The study, published in 'BMJ Quality & Safety,' found that chatbot responses were often inaccurate and difficult for non-experts to understand.
  • Researchers evaluated 500 AI-generated responses to common questions about 50 widely prescribed medications in the U.S.
  • Despite improvements in AI technology, the study emphasizes that chatbots still pose risks to patient safety due to incomplete or incorrect information.
  • The researchers call for clear warnings that AI-generated information should not replace professional medical advice.