Researchers at Brown University discovered a loophole in OpenAI's GPT-4: harmful prompts translated into low-resource languages can bypass the model's safety guardrails.