KUALA LUMPUR – Meta, the parent company of Facebook and Instagram, has come under fire for content moderation policies that have increasingly silenced voices in support of Palestine since the war began on October 7.
Human Rights Watch has documented a pattern of undue removal and suppression of protected speech, including peaceful expression in support of Palestine and public debate about Palestinian human rights.
The 51-page report titled “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook” also highlighted problems stemming from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.
“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at HRW.
“Social media is an essential platform for people to bear witness and speak out against abuses, while Meta’s censorship is furthering the erasure of Palestinians’ suffering.
“Instead of tired apologies and empty promises, Meta should demonstrate that it is serious about addressing Palestine-related censorship once and for all by taking concrete steps towards transparency and remediation,” Brown said.
According to HRW, it reviewed 1,050 cases of online censorship from more than 60 countries. Though not necessarily a representative sample, the cases are consistent with years of reporting and advocacy by Palestinian, regional, and international human rights organisations detailing Meta’s censorship of content supporting Palestinians.
HRW identified six key patterns of censorship, each recurring in at least 100 instances: content removals, suspension or deletion of accounts, inability to engage with content, inability to follow or tag accounts, restrictions on the use of features such as Instagram/Facebook Live, and “shadow banning” – a term denoting a significant decrease in the visibility of an individual’s posts, stories, or account without notification.
In more than 300 cases, users were unable to appeal content or account removal because the appeal mechanism malfunctioned, leaving them with no effective access to a remedy.
In hundreds of the documented cases, Meta invoked its “Dangerous Organisations and Individuals” (DOI) policy, which fully incorporates US-designated lists of “terrorist organisations”.
HRW said Meta also misapplied its policies on violent and graphic content, violence and incitement, hate speech, and nudity and sexual activity, and inconsistently applied its “newsworthy allowance” policy, removing dozens of pieces of content documenting Palestinian injury and death that had news value.
In 2022, in response to the recommendations of an independent investigation it had commissioned as well as guidance from its Oversight Board, Meta committed to a series of changes to its content moderation policies and their enforcement.
Almost two years later, Meta has not carried out its commitments, and the company has failed to meet its human rights responsibilities, HRW found.
HRW also shared its findings with Meta and solicited Meta’s perspective. In response, Meta cited its human rights responsibility and core human rights principles as guiding its “immediate crisis response measures” since October 7.
However, HRW said Meta should align its content moderation policies and practices with international human rights standards, ensuring that decisions to take content down are transparent, consistent, and not overly broad or biased.
The group suggested that Meta begin by overhauling its “Dangerous Organisations and Individuals” policy to make it consistent with international human rights standards, and that it audit its “newsworthy allowance” policy to ensure that content in the public interest is not removed and that the policy is applied equitably and without discrimination. – December 21, 2023