ChatGPT Provides Incorrect or Incomplete Answers to 75% of Medication-Related Queries: Study

Despite its success in other areas, the AI tool's inaccuracy on medication questions raises safety concerns; in one instance, its advice could have led to a harmful drug interaction.

  • ChatGPT, a popular AI tool, has been found to provide incorrect or incomplete answers to nearly 75% of medication-related queries, according to a study by Long Island University.
  • The study highlighted a potentially dangerous instance in which ChatGPT incorrectly stated that there were no interactions between the COVID-19 antiviral Paxlovid and the blood-pressure-lowering medication verapamil; taking the two together can in fact lower blood pressure to a dangerous degree.
  • When researchers asked ChatGPT to cite references for its answers, only eight of the 39 responses included any, and every reference it supplied was non-existent.
  • Despite these findings, ChatGPT has shown promise elsewhere in medicine: it outperformed human candidates on a mock obstetrics and gynecology exam, and one study found its responses were rated as more caring and empathetic than those of human doctors.
  • OpenAI's usage policies state that its technologies should not be used to provide diagnostic or treatment services for serious medical conditions.