Elon Musk-owned social media platform X (formerly Twitter) has agreed to strengthen protections against illegal hate speech and terrorist content in the UK, following months of pressure from Britain’s media regulator, Ofcom. According to a Reuters report, X has committed to reviewing suspected illegal hate- and terrorism-related posts within an average of 24 hours and to assessing at least 85% of flagged content within 48 hours.
The platform has also agreed to restrict access in Britain to accounts operated by or linked to organisations banned under UK terrorism laws, submit quarterly performance data to Ofcom over the next year, and engage external experts to improve reporting systems after concerns that harmful content reports were not always clearly received or acted upon, the Reuters report noted.
The move comes as regulators in several countries increase scrutiny of online platforms over moderation practices and the spread of illegal or extremist content. X, which has regularly said it enforces bans on terrorist organisations and hateful content, did not immediately respond to a request for comment.
What commitments X has made to tackle hate speech and terrorist content
According to Ofcom, X’s new commitments include reviewing suspected illegal hate speech and terrorism-related posts more quickly and improving transparency around moderation efforts. The platform will:
- Review suspected illegal hate and terrorism-related posts within 24 hours on average
- Assess at least 85% of flagged content within 48 hours
- Restrict access in the UK to accounts linked to organisations banned under British terrorism laws
- Submit quarterly performance reports to Ofcom for the next year
- Work with external experts to improve user reporting systems
Ofcom said that civil society groups had raised concerns that reports of harmful content were not always acknowledged or addressed effectively.
“We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. This is of particular importance in the UK following a number of recent hate-motivated crimes suffered by the country's Jewish community,” said Oliver Griffiths, Ofcom’s online safety group director.
X’s commitments come amid rising concern over antisemitism in Britain
The deal comes amid rising concern about attacks on Jewish people and institutions in Britain. Recent incidents include the stabbing of two men in north London, which police are treating as an act of terrorism.
Imran Ahmed, chief executive of the Center for Countering Digital Hate, said the commitments followed “sustained campaigning” after last year’s attack on Heaton Park Synagogue in northern England.
Danny Stone, chief executive of the Antisemitism Policy Trust, described the commitments as “a good start” but said X was still “failing in so many regards” to tackle racism.
Regulators in the European Union, Australia and Singapore have also pressured X over illegal or extremist content. The European Commission has opened a formal investigation into whether the platform is failing to curb hate speech.
The commitments in Britain come alongside wider scrutiny of X’s artificial intelligence tools. Earlier this year, Reuters reported that Grok, the company’s AI chatbot, generated sexualised images in some cases, even when users indicated subjects had not consented.
Ofcom said its investigation into X remains ongoing, including reviews of the platform’s systems for addressing illegal content and issues related to Grok.