Social media platform X, formerly Twitter, played a central role in spreading the false narratives and harmful content that contributed to racist violence against Muslim and migrant communities in the UK following the murder of three young girls in the town of Southport, Amnesty International has established in a technical explainer published today.
A technical analysis of X’s open-source code (publicly available software) reveals that its recommender system – the content-ranking algorithms that drive the “For You” page – systematically prioritizes content that sparks outrage and provokes heated exchanges, reactions and engagement, without adequate safeguards to prevent or mitigate harm.
“Our analysis shows that X’s algorithmic design and policy choices contributed to heightened risks amid a wave of anti-Muslim and anti-migrant violence observed in several locations across the UK last year, and which continues to present a serious human rights risk today,” said Pat de Brún, Head of Big Tech Accountability at Amnesty International.
On 29 July 2024, three young girls – Alice Dasilva Aguiar, Bebe King and Elsie Dot Stancombe – were murdered, and 10 others injured, by 17-year-old Axel Rudakubana. Within hours of the attack, misinformation and falsehoods about the perpetrator’s identity, religion, and immigration status flooded social media platforms, and were prominent on X.
Amnesty International’s analysis of X’s open-source recommender algorithm uncovered systemic design choices that favour contentious engagement over safety.
X’s own source code, published in March 2023, reveals that falsehoods, irrespective of their harmfulness, may be prioritized and surface more quickly in timelines than verified information. X’s “heavy ranker” model – the machine-learning system that decides which posts get promoted – prioritizes “conversation”, regardless of the nature of the content. As long as a post drives engagement, the algorithm appears to have no mechanism for assessing its potential to cause harm – at least not until enough users themselves report it. These design features provided fertile ground for inflammatory racist narratives to thrive on X in the wake of the Southport attack.
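To illustrate the pattern described above, the sketch below shows in simplified Python how an engagement-weighted ranking score of this kind can be computed. The signal names, weights and numbers are hypothetical assumptions chosen for illustration – they are not taken from X’s published code – but they capture the structural point: replies and “conversation” dominate the score, and no input measures whether a post is false or harmful.

```python
# Illustrative sketch only: hypothetical weights and signal names, not X's actual code.
# A ranker that scores posts purely on predicted engagement, with conversation
# (replies) weighted most heavily, has no input that captures falsehood or harm.

ENGAGEMENT_WEIGHTS = {
    "predicted_like": 1.0,
    "predicted_repost": 2.0,
    "predicted_reply": 13.0,               # heated exchanges count far more than likes
    "predicted_reply_engaged_by_author": 30.0,
}

def rank_score(predictions: dict[str, float]) -> float:
    """Weighted sum of predicted engagement probabilities.

    Note what is absent: no term for accuracy, harm, or the targeting of a
    protected group. A false, inflammatory post that provokes replies can
    outscore a sober correction that merely attracts likes.
    """
    return sum(weight * predictions.get(signal, 0.0)
               for signal, weight in ENGAGEMENT_WEIGHTS.items())

# A provocative falsehood that draws replies outranks an accurate post that draws likes.
inflammatory = {"predicted_like": 0.02, "predicted_reply": 0.15,
                "predicted_reply_engaged_by_author": 0.05}
factual = {"predicted_like": 0.30, "predicted_repost": 0.05}

print(rank_score(inflammatory))  # ~3.47
print(rank_score(factual))       # 0.40
```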
Further analysis of the system also uncovered built-in amplification biases favouring “Premium” (formerly Blue) verified subscribers, whose posts are automatically promoted over those of ordinary users. Before authorities had shared an official account of the incident, false statements and Islamophobic narratives about it began circulating on social media. Hashtags such as #Stabbing and #EnoughisEnough were also used to spread claims falsely suggesting the attacker was a Muslim and/or an asylum-seeker.
An account on X called “Europe Invasion”, known for publishing anti-immigrant and Islamophobic content, posted shortly after news of the attack emerged that the suspect was “alleged to be a Muslim immigrant”. That post garnered over four million views. Within 24 hours, X posts speculating that the perpetrator was Muslim, a refugee, a foreign national, or had arrived by boat had collectively amassed an estimated 27 million impressions.
The Southport tragedy occurred in the context of major policy and personnel changes at X. Since Elon Musk’s takeover in late 2022, X has laid off content moderation staff, fired trust and safety engineers, disbanded Twitter’s Trust and Safety advisory council, and reinstated numerous accounts previously banned for hate or harassment, including that of Stephen Yaxley-Lennon, a far-right activist better known as Tommy Robinson.
Inflammatory posts
Tommy Robinson told his 840,000 X followers that there was “more evidence to suggest Islam is a mental health issue rather than a religion of peace”, further stoking hostility against Muslims.
X’s owner Elon Musk – who has 140 million followers on his personal X account – also notably amplified false narratives circulating about the Southport attack. On 5 August 2024, as the riots were spreading, he commented on a video posted by Ashley St Clair, saying “civil war is inevitable.”
UK Prime Minister Keir Starmer intervened, calling for the protection of Muslims amid waves of targeted attacks by mobs on mosques, refugee shelters and Asian, Black and Muslim communities.
In response, Elon Musk publicly retorted: “Shouldn’t you be concerned about attacks on *all* communities?”
Amnesty International’s analysis shows that in the two weeks following the Southport attack, Tommy Robinson’s posts on X received over 580 million views – an unprecedented reach for a figure banned on most mainstream platforms for breaching hate speech rules.
We wrote to X to share our findings in a letter dated 18 July 2025 and provided the company with an opportunity to respond. The company had not responded by the time of publication.
Time for Accountability
Amnesty International’s analysis of X’s design and policy choices raises serious concerns about how the platform’s recommender system functions in crisis situations. The way the system weighs, ranks, and boosts content – particularly posts that generate heated replies or that are created or shared by “Premium” (formerly Blue) accounts – can push harmful material to far larger audiences.
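As a simplified illustration of the boosting described above, the sketch below applies a flat ranking multiplier to posts from paying subscribers. The multiplier values and function name are hypothetical assumptions for illustration, not constants from X’s source code; the point is that the boost depends only on subscription status, never on what the post says.

```python
# Illustrative sketch only: boost values and names are hypothetical assumptions,
# not constants taken from X's published source code.

PREMIUM_BOOST_IN_NETWORK = 4.0      # hypothetical boost when the viewer follows the subscriber
PREMIUM_BOOST_OUT_OF_NETWORK = 2.0  # hypothetical boost when the viewer does not follow them

def boosted_score(base_score: float, is_premium: bool, in_network: bool) -> float:
    """Apply a flat multiplier to a post's ranking score if its author pays for Premium.

    The multiplier depends only on subscription status, not on whether the
    post is accurate or harmful, so paying accounts reach larger audiences
    with whatever content they publish.
    """
    if not is_premium:
        return base_score
    return base_score * (PREMIUM_BOOST_IN_NETWORK if in_network else PREMIUM_BOOST_OUT_OF_NETWORK)

# The same post scores several times higher purely because its author subscribes.
print(boosted_score(1.0, is_premium=False, in_network=False))  # 1.0
print(boosted_score(1.0, is_premium=True, in_network=False))   # 2.0
print(boosted_score(1.0, is_premium=True, in_network=True))    # 4.0
```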
“Without effective safeguards, the likelihood increases that inflammatory or hostile posts will gain traction in periods of heightened social tension,” said Pat de Brún.
Where such content targets racial, religious and other marginalized groups, this can create serious human rights risks for members of these groups. X’s failure to prevent or adequately mitigate these foreseeable risks constitutes a failure to respect human rights.
Whilst regulatory frameworks such as the UK’s Online Safety Act (OSA) and the EU’s Digital Services Act (DSA) now establish legal obligations for platforms to assess and mitigate some systemic risks, these obligations must be robustly enforced to have any effect. X’s design choices and opaque practices continue to pose human rights risks that demand greater accountability, not just scrutiny.
Amnesty International is calling for effective regulatory enforcement and robust accountability measures to address the systemic harms created by X’s design choices. The UK government must also address remaining gaps in the current online safety regime to hold platforms like X accountable for broader harms caused by their algorithms.
Background:
British authorities responded to the racist riots that followed the Southport attack with a series of arrests, charging individuals who used X and other platforms to incite violence or spread malicious falsehoods. Some perpetrators received prison sentences for their social media posts. A July 2025 UK parliamentary report established that social media business models incentivized the spread of misinformation after the Southport murders.