
AI amplifies anti-Muslim hate in UK: Study


Social media platforms have been exploited by extremist networks using bots and recycled accounts to create a false impression of widespread support, the report warns. (Photo: Leon Neal/Getty Images)

FOREIGN extremists are using artificial intelligence and social media manipulation to fuel anti-Muslim hatred in Britain, with online incitement rapidly translating into real-world attacks on mosques and communities, new research revealed on Tuesday (10).

The Tell MAMA report warned that contemporary far-right extremism is “no longer confined to domestic networks or isolated online ecosystems, but is increasingly shaped by transnational actors, foreign influence, and rapidly advancing digital technologies”.


Titled The Risk of Foreign Influence on the UK Far-Right and Anti-Muslim Hate, the report found that AI was used to generate racist imagery and propaganda, falsely depicting Muslims as violent and glorifying attacks on police and public infrastructure.

“Artificial intelligence is acting as a force multiplier for anti-Muslim hate, driven by international actors based in Russia,” said Iman Atta OBE, director of Tell MAMA, which measures and monitors anti-Muslim incidents across the country.

“It allows small extremist networks to appear larger, more credible and more threatening than they are.”

Researchers warned that current counter-terrorism frameworks, which “remain largely structured around domestic threat models”, cannot adequately respond to threats that are now “cross-platform, cross-border, and technologically fused”.

The report called for stronger regulation of AI-generated extremist content, recognition of transnational online extremism in UK counter-terrorism strategy, and increased protection and support for Muslim communities and places of worship.

Tell MAMA urged policymakers to “do more to address the risks of racists and extremists using AI to spread fear, encourage violence and recruit”, alongside steps to improve social media literacy skills in view of the rise in online disinformation targeting minority communities.

The latest study centres on a far-right network called “Direct Action” that operated between September 2024 and early 2025, targeting British Muslim communities. The network offered £100 in cryptocurrency to anyone who vandalised mosques with racist graffiti, successfully coordinating attacks on mosques, community centres and a primary school in London and Manchester between January and February 2025.

According to the study, social media platforms were exploited using purchased dormant accounts, bots, and hybrid “cyborg” accounts to create a false sense of domestic support. Researchers identified X (formerly Twitter) accounts created years earlier that suddenly became active after the Southport stabbings in July 2024, spamming identical messages encouraging violence.

While the true origins of the network remain unclear, investigators said they found “compelling” evidence of foreign involvement. The group’s main logo was copied directly from a defunct Russian hacktivist Telegram channel called “The Youth of the Saboteur”.

Messages contained tell-tale signs of non-native English speakers, including the pound sign placed after the figure (2.500£ instead of £2,500), Glasgow misspelt as “Glassgow”, and repetitive phrases like “discontented with politics and migrants” – phrasing more common in automated translation.

The network also used protest footage from Athens, Greece, filmed during 2011 anti-austerity demonstrations, falsely presenting it as UK unrest to encourage violence against police.

Researchers also identified AI-generated images depicting Muslims as terrorists, fake footage of burning police cars with misspelt text, and synthesised voices encouraging violence.

Videos combined real footage from the far-right riots following the Southport tragedy with AI-generated content, falsely claiming the attacker was Muslim and using the sentiment to recruit supporters.

“Behind every online incitement post, propaganda video, and encrypted message thread lies a real-world target: a mosque, a community centre, a family, or an individual,” the report said. “The transition from online incitement to physical vandalism, arson threats, and terror tactics is not theoretical; it is already occurring.”
