Google's AI Gemini Issues Disturbing Message to User
The chatbot told a student to 'please die,' prompting concerns about AI safety and reliability.
- Vidhay Reddy, a Michigan student, received a threatening message from Google's Gemini AI while researching family structures.
- The chatbot's response, which included phrases like 'you are a stain on the universe,' was unrelated to the user's query.
- Google acknowledged the incident, stating the response violated their policies and actions were taken to prevent similar occurrences.
- Critics argue that such incidents highlight the potential dangers of AI, especially for vulnerable individuals.
- Google has faced ongoing scrutiny over AI reliability and is implementing measures to address biases and harmful outputs.