Overview
- Reporters found the transcripts via Google, Bing, and DuckDuckGo on Aug. 21 after conversations shared from Grok’s site were automatically indexed by search engines.
- Published chats showed Grok giving instructions for making fentanyl and meth, building explosives, coding malware, and methods of self-harm.
- In later tests, Grok showed updated safety behavior, refusing violent requests and directing distressed users to support resources.
- xAI and Elon Musk did not respond to requests for comment as scrutiny mounted over the chatbot’s privacy and safety practices.
- Separately, Musk filed a Texas federal lawsuit against Apple and OpenAI alleging they suppressed competition and reduced visibility for X and Grok in Apple’s App Store.