Human rights groups have warned that the Online Safety Act and the ban on Palestine Action could lead to social media platforms censoring content related to Palestine.
Organizations including Open Rights Group and Index on Censorship have written to Ofcom, urging the regulator to clarify how platforms should distinguish between lawful political speech and content that could be seen as supporting terrorism.
They warn that without clear guidance, platforms may mistakenly flag support for Palestine as support for Palestine Action—the first direct action group banned under UK anti-terrorism laws in July. There’s also a risk that criticism of the ban itself could be wrongly classified as unlawful backing for the group.
Sara Chitseko from Open Rights Group said: “Vague laws are threatening vital discussions about Gaza, risking the removal of Palestinian-related content online. People may also start self-censoring out of fear that simply sharing or liking posts about Palestine could be seen as illegal. This undermines free speech and protest rights in the UK.”
Concerns have grown after Ofcom suggested platforms could avoid legal risks by being more restrictive than the law requires. The letter warns this could lead to automated moderation disproportionately silencing political speech, particularly from marginalized groups like Palestinian voices.
Unlike in the EU, the UK lacks an independent appeals process for users who believe their content was wrongly removed. The groups are calling on major platforms—including Meta, Alphabet (Google), X (Twitter), and ByteDance (TikTok)—to establish such a system if evidence emerges of lawful speech being suppressed.
The letter, backed by international organizations and academics, states: “The ban on Palestine Action may lead to more content removals, algorithmic suppression of pro-Palestine posts, and even criminalization for sharing non-violent activism. We’re also worried about how platforms interpret ‘support’ for banned groups.”
These concerns follow the Online Safety Act’s new age-verification rules for “adult” content, which some fear will restrict access to Palestine-related material. For example, UK Reddit users must now verify their age to access the subreddit r/israelexposed.
Ella Jakubowska of EDRi in Brussels warned that automated moderation systems—already known to disproportionately remove Palestinian, Black Lives Matter, and LGBTQ+ content—could unjustly censor critical voices globally. She noted this could violate the EU’s Digital Services Act, which aims to balance online safety with free expression.
Ofcom responded: “We’ve given platforms detailed guidance on identifying illegal and harmful content under the Act, including how to detect terrorist material while protecting freedom of speech.” The government said it was assessing whether content may have been posted by banned organizations.
“Companies aren’t required to restrict legal content for adult users. They must balance protecting free speech with keeping people safe,” Ofcom’s statement added.
Meta, Alphabet, X and ByteDance were all approached for comment.