Overview
- Amnesty International’s technical analysis of X’s open-source recommender code found that the platform’s engagement-first design systematically prioritizes content that provokes outrage, including misinformation and hate speech (a simplified sketch of this scoring mechanic follows the list).
- The report found that within 24 hours of the July 2024 Southport attack, false claims linking the attacker to Muslim, refugee or foreign identities amassed an estimated 27 million impressions on X.
- The algorithm also boosts posts from premium verified subscribers, amplifying toxic content from far-right figures such as Tommy Robinson and Andrew Tate, whose messages have reached unprecedented audiences.
- Since Elon Musk’s takeover in late 2022, X has dismantled key moderation safeguards: laying off content moderation staff, disbanding its Trust and Safety Council and reinstating accounts previously banned for hate speech.
- Amnesty and other human rights groups are pressing UK and EU regulators to enforce platforms’ obligations under the Online Safety Act and the Digital Services Act; X has not publicly responded to the report’s findings.
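
To make the engagement-first mechanic concrete, here is a minimal Python sketch of how a weighted-engagement ranker can systematically favor provocative posts. All names, weight values and the premium multiplier below are illustrative assumptions for this sketch, not figures taken from X’s released code; the pattern itself (a weighted sum of predicted engagement probabilities, with conversation signals weighted far above likes) reflects the design the report analyzed.

```python
# Illustrative sketch of an engagement-weighted ranking score, assuming a
# heavy-ranker-style model that predicts per-action engagement
# probabilities. Weights and the premium boost are hypothetical
# placeholders, chosen only to show how heavily weighting reply and
# conversation signals can favor divisive posts.

from dataclasses import dataclass

# Hypothetical per-action weights: replies and predicted back-and-forth
# dominate the score, so posts that provoke argument outrank posts that
# merely earn likes.
ENGAGEMENT_WEIGHTS = {
    "like": 0.5,
    "repost": 1.0,
    "reply": 13.5,        # conversation-driving content weighted heavily
    "author_reply": 75.0, # predicted back-and-forth with the author
}

PREMIUM_BOOST = 2.0  # hypothetical multiplier for paying verified accounts


@dataclass
class Candidate:
    post_id: str
    is_premium_author: bool
    predicted: dict  # action name -> predicted probability in [0, 1]


def rank_score(c: Candidate) -> float:
    """Weighted sum of predicted engagement, boosted for premium authors."""
    score = sum(ENGAGEMENT_WEIGHTS[a] * p for a, p in c.predicted.items())
    if c.is_premium_author:
        score *= PREMIUM_BOOST
    return score


# A divisive post with strong predicted reply activity outranks a benign
# post with far more likes, before the premium boost even applies.
posts = [
    Candidate("benign", False,
              {"like": 0.6, "repost": 0.1, "reply": 0.02, "author_reply": 0.0}),
    Candidate("divisive", True,
              {"like": 0.2, "repost": 0.05, "reply": 0.3, "author_reply": 0.1}),
]
for c in sorted(posts, key=rank_score, reverse=True):
    print(c.post_id, round(rank_score(c), 2))
```

Under these assumed weights the "divisive" post scores roughly 23.4 against 0.67 for the "benign" one: the reply-heavy signals alone carry it ahead, and the premium multiplier then widens the gap, which is the amplification dynamic the report describes.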