Overview
- Jonathan Hall KC’s annual review outlines seven distinct pathways through which terrorists could exploit generative AI for propaganda, recruitment and attack planning
- He proposes creating offences that ban the production or supply of AI programs designed to stir up racial or religious hatred, hatred on the grounds of sexual orientation, or to facilitate terrorist acts
- The report cautions that AI chatbots can create closed loops of radicalisation and screen potential recruits, and notes the role a chatbot played in the 2021 Jaswant Singh Chail Windsor Castle incident
- Generative AI tools could produce tailored extremist content, such as deepfakes, racist video games and synthesised battle scenes, to amplify narratives and foster social distrust
- Britain’s Laboratory for AI Security Research will work with NATO and Five Eyes partners to develop technical and legal countermeasures based on the report’s recommendations