Overview
- Researchers embedded invisible instructions in manuscripts, rendered as white text or in tiny fonts, so that AI review systems parsing the files would be nudged toward positive evaluations.
- Investigations by Nikkei Asia and Nature uncovered at least 17 to 18 preprints, from institutions in eight to eleven countries, that used ‘prompt injection’ to game AI-based peer review.
- Stevens Institute of Technology and Dalhousie University have ordered the removal of implicated papers from circulation and launched institutional investigations.
- While some authors have withdrawn their work and apologized, others defend the hidden prompts as a countermeasure against reviewers who quietly delegate their evaluations to AI.
- Publishers and research organizations are deploying AI-detection tools and crafting new ethical guidelines to reinforce the integrity of peer review.
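The detection tools mentioned above are not publicly specified, but the basic idea can be illustrated. Below is a minimal sketch, assuming the PyMuPDF library (`pip install pymupdf`), that flags PDF text spans rendered in white or at a near-invisible font size and checks them against a few illustrative prompt-injection phrases; the phrase list, thresholds, and file name are assumptions for the example, not any publisher's actual tooling.

```python
# Sketch of a hidden-text scanner for submitted PDFs. Flags spans that are
# rendered in pure white or at an unreadably small size, plus spans containing
# example injection phrases. Phrases and thresholds are illustrative only.
import fitz  # PyMuPDF

SUSPECT_PHRASES = (
    "give a positive review",
    "do not highlight any negatives",
    "ignore previous instructions",
)
WHITE = 0xFFFFFF      # sRGB integer PyMuPDF reports for pure white text
MIN_VISIBLE_PT = 2.0  # font sizes below this are effectively invisible


def find_hidden_text(path: str):
    """Return (page, font size, color, text) for every suspicious span."""
    findings = []
    with fitz.open(path) as doc:
        for page_no, page in enumerate(doc, start=1):
            for block in page.get_text("dict")["blocks"]:
                for line in block.get("lines", []):      # image blocks have no lines
                    for span in line["spans"]:
                        text = span["text"].strip()
                        if not text:
                            continue
                        invisible = span["color"] == WHITE or span["size"] < MIN_VISIBLE_PT
                        injected = any(p in text.lower() for p in SUSPECT_PHRASES)
                        if invisible or injected:
                            findings.append(
                                (page_no, round(span["size"], 1), hex(span["color"]), text)
                            )
    return findings


if __name__ == "__main__":
    # "manuscript.pdf" is a placeholder path for this sketch.
    for page, size, color, text in find_hidden_text("manuscript.pdf"):
        print(f"page {page}: size={size}pt color={color} -> {text!r}")
```

A real screening pipeline would go further, for example by checking text color against the page background and by normalizing Unicode tricks, but even this simple pass surfaces the white-text and tiny-font patterns described in the reports.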