We live in an era of increasing worry that internet platforms like Facebook or Twitter, which mediate our online speech, are also fomenting hatred, spreading misinformation, and distorting political outcomes. The 2016 US presidential election, in particular, unleashed a torrent of concern about platform-borne harms. Policymakers around the world have called for laws requiring platforms to do more to combat illegal and even merely "harmful" content.

From the perspective of platforms themselves, these proposals have a lot in common. Regardless of their substantive mandates (to address content that is misleading, hateful, or violent, for example), they all require similar operational processes to comply. Platforms already have these processes in place to enforce current laws and their discretionary Community Guidelines. Any new efforts to regulate content online will likely build on existing systems, personnel, and tools, and inherit both their strengths and their weaknesses. That makes it important to understand those systems.

Reliable information about platforms' content-removal systems was, for many years, hard to come by; but data and disclosures are steadily emerging as researchers focus on the topic and platforms ramp up their transparency efforts. This chapter reviews the current and likely future sources of information.

Some content takedowns are required by law, while others are performed voluntarily. Legal takedowns are shaped by intermediary liability laws, which tell platforms what responsibility they have for unlawful content posted by their users.
The Court of Justice of the European Union’s (CJEU) 2019 ruling in Glawischnig-Piesczek v Facebook Ireland addresses courts’ powers to issue injunctions requiring internet hosting platforms to proactively monitor content posted by their users. It answers important questions about limitations on such injunctions under the eCommerce Directive (Directive 2000/31/EC). But, as this Opinion explains, it leaves some much thornier questions unresolved.
Glawischnig-Piesczek holds that courts may, consistent with Art. 15 of the eCommerce Directive, require platforms to monitor for and remove specific content. Monitoring orders may not, however, require platforms to carry out an ‘independent assessment’ of the content. The ruling does not closely examine what kinds of injunctions or filtering technologies are permissible, nor does it explore fundamental rights considerations when courts are asked to order platforms to monitor their users. This Opinion lays out the case’s technological, legal, and policy backdrop, and identifies important questions it leaves open for Member State courts. In particular, the Opinion suggests that Glawischnig-Piesczek’s limitation on ‘independent assessment’ will make it difficult for courts to devise injunctions that simultaneously follow the CJEU’s guidance under the eCommerce Directive and meet the requirements of fundamental rights. It lists key fundamental rights considerations for future cases involving potential monitoring injunctions, including procedural considerations in cases affecting the rights of absent third parties.