Online platforms should stop partnering with government agencies to remove content

Government involvement in content moderation raises serious human rights concerns in any context, and these concerns are even more troubling when the involvement comes from law enforcement. We recently filed a comment with the Meta Oversight Board urging them to take this issue seriously.

When platforms cooperate with government agencies, they become inherently biased in favor of positions favored by the government. This gives government entities outsized influence to manipulate content moderation systems for their own political purposes – to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it’s easy for governments – and especially law enforcement – to use them to coerce and pressure platforms into moderating speech they might not otherwise have chosen to moderate.

For example, Vietnam has boasted of being increasingly successful in getting Facebook posts deleted, but has been accused of targeting dissidents in doing so. Similarly, Israel’s Cyber Unit has boasted compliance rates of up to 90 percent with its takedown requests across social media platforms. But these requests have unfairly targeted Palestinian rights activists, media outlets, and civil society, and one such incident prompted Facebook’s Oversight Board to recommend that Facebook “formalize a transparent process for how it receives and responds to all government requests to remove content, and ensure they are included in transparency reports”.

Issues with government involvement in content moderation were addressed in the recently revised Santa Clara Principles 2.0, in which EFF and other organizations called on social media companies to “recognize the particular risks for user rights that result from state involvement in content moderation processes”. The Santa Clara Principles also state that “State actors shall not exploit or manipulate companies’ content moderation systems to censor dissidents, political opponents, social movements, or any person.”

Specifically, users must be able to access:

  • Details of any rules or policies, whether they apply globally or in certain jurisdictions, which seek to reflect the requirements of local laws.
  • Details of any formal or informal working relationships and/or agreements the company has with state actors when it comes to reporting content or accounts or any other actions taken by the company.
  • Details of the process by which content or accounts reported by state actors are evaluated, whether based on company rules or policies or local laws.
  • Details of state requests to action posts and accounts.

User access to this information is all the more relevant as social media sites have granted government authorities “trusted flagger” status to notify platforms of content that is illegal or violates their community guidelines or terms of service. This status has been granted to governments even when their own civil liberties records are questionable, allowing for the censorship of speech that challenges government-imposed narratives.

These concerns about government influence over the content available to online users are even more serious given that the EU’s Digital Services Act (DSA) will soon impose new mechanisms for platforms to designate government agencies, and eventually law enforcement agencies such as Europol, as trusted flaggers, giving governments priority status to “flag” platform content. Although trusted flaggers are only supposed to report illegal content, the DSA’s preamble encourages platforms to empower trusted flaggers to also take action against content that is inconsistent with their terms of service. This opens the door to overreach by law enforcement and to platforms’ overreliance on law enforcement for content moderation purposes.

Additionally, government entities may simply lack the expertise to effectively flag content across a variety of platforms. This is evident in the UK, where London’s Metropolitan Police Service, or the Met, persistently seeks to remove drill music from online platforms based on the mistaken and frankly racist belief that the genre is not creative expression at all, but a testament to criminal activity. In a world first for law enforcement, YouTube granted Met officers trusted flagger status in 2018 to “implement a more effective and efficient process for the removal of online content”. This sweeping system of content moderation targeting drill music is governed by the Met’s Project Alpha, which involves police officers from gang units maintaining a database that includes drill music videos and monitoring social media sites for intelligence on criminal activity.

The Met has rejected accusations that Project Alpha suppresses free speech or violates the right to privacy. But reports show that since November 2016, the Met has made 579 referrals for removal of “potentially harmful content” from social media platforms, and 522 of those items have been removed, mostly from YouTube. A 2022 Vice report also found that 1,006 rap videos have been added to the Project Alpha database since 2020, and a heavily redacted official Met document acknowledged that the project was likely to involve “systematic tracking or large-scale profiling”, with men aged 15 to 21 the main target. Drill lyrics and music videos are not simple or literal admissions of involvement in criminal activity, but law enforcement’s “street illiteracy” reinforces the idea that drill music depicts actual activities the artists themselves have seen or done, rather than artistic expression communicated through culturally specific language and references that police officers are rarely equipped to decode or understand.

Law enforcement officers are not experts on music, and they are primed to link it to violence. As such, the flags raised by police on social media platforms are entirely one-sided, rather than being weighed against expert perspectives on the other side. And it’s particularly troubling that law enforcement is pursuing concerns about gang activity through its partnerships with social media platforms, efforts that disproportionately target young people and communities of color.

Indeed, the removal of a drill music video at the behest of anonymous “UK law enforcement” is the very case the Oversight Board is considering and on which we have commented.

All individuals should be able to share content online without having their voices censored by government authorities because their views are at odds with those of the powerful. Users should be notified when government agencies have requested removal of their content, and companies should disclose any back-channel agreements they have with government actors, including trusted or preferred flagging arrangements, and reveal the specific government actors to whom these privileges and access are granted.

Ashley C. Reynolds