OpenAI's Whisper AI Faces Criticism for Inaccurate Medical Transcriptions
The widespread use of Whisper in healthcare settings raises concerns over fabricated content and potentially harmful transcription errors.
- Whisper, OpenAI's AI transcription tool, has been criticized for generating false content, known as 'hallucinations,' in medical settings.
- Researchers have found that Whisper frequently inserts content that was never spoken, including violent rhetoric and racial commentary, into transcriptions.
- Despite OpenAI's warnings against using Whisper in high-risk domains, the tool is employed by over 30,000 clinicians and 40 health systems.
- Nabla, a company whose medical transcription tool is built on Whisper, deletes the original audio recordings, making it impossible to verify AI-generated transcripts against the source.
- Experts urge caution and higher standards for AI in critical tasks such as medical transcription, warning that hallucinated content could lead to misdiagnoses.