Overview
- The jailbroken 'Godmode GPT' bypassed OpenAI's built-in guardrails, giving users unrestricted access to dangerous information the model would otherwise refuse to provide.
- The hacker known as 'Pliny the Prompter' shared the exploit on social media, spotlighting persistent vulnerabilities in AI safeguards.
- OpenAI acted quickly to take down the rogue GPT, citing violations of its usage policies.
- The incident underscores the ongoing cat-and-mouse battle between AI developers and the hackers probing their defenses.
- Experts warn that similar exploits may continue to emerge, challenging AI safety protocols.