As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation. Automated hash-matching and predictive machine learning tools, which we define here as algorithmic moderation systems, are increasingly being deployed by major platforms for user-generated content, such as Facebook, YouTube and Twitter, to conduct content moderation at scale. This article provides an accessible technical primer on how algorithmic moderation works; examines some of the existing automated tools that major platforms use to handle copyright infringement, terrorism and toxic speech; and identifies key political and ethical issues for these systems as reliance on them grows. Recent events suggest that algorithmic moderation has become necessary to manage growing public expectations of platform responsibility, safety and security on the global stage; however, as we demonstrate, these systems remain opaque, unaccountable and poorly understood. Despite the potential promise of algorithms or 'AI', we show that even 'well optimized' moderation systems could exacerbate, rather than relieve, many existing problems with content policy as enacted by platforms, for three main reasons: automated moderation threatens to (a) further increase opacity, making a famously non-transparent set of practices even more difficult to understand or audit; (b) further complicate outstanding issues of fairness and justice in large-scale sociotechnical systems; and (c) re-obscure the fundamentally political nature of speech decisions executed at scale.
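Of the two technique families named in that definition, hash-matching is the simpler: platforms fingerprint known violating content and block re-uploads whose fingerprints match. The sketch below is a rough illustration only; the blocklist and function names are hypothetical, and real deployments (e.g. PhotoDNA or Content ID) rely on perceptual hashes that survive re-encoding and cropping rather than the exact cryptographic digests used here.

# Minimal sketch of hash-matching moderation (illustrative, not any
# platform's actual pipeline). Production systems use perceptual
# hashes; SHA-256 is substituted here for simplicity.
import hashlib

# Hypothetical blocklist: digests of content previously ruled violating.
KNOWN_VIOLATING_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

def should_block(upload: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known-bad digest."""
    return fingerprint(upload) in KNOWN_VIOLATING_HASHES

print(should_block(b"test"))  # True: b"test" hashes to the listed digest

The predictive machine learning tools the article also covers work differently: rather than matching against a list of known items, a trained classifier scores previously unseen content, which is where most of the opacity and fairness concerns discussed above arise.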
Algorithmic governance, a key concept in controversies around the emerging digital society, highlights the idea that digital technologies produce social ordering in a specific way. Starting with the origins of the concept, this paper portrays different perspectives and objects of inquiry where algorithmic governance has gained prominence, ranging from the public sector to labour management and the ordering of digital communication. Recurrent controversies across all sectors, such as datafication and surveillance, bias, agency and transparency, indicate that the concept of algorithmic governance makes it possible to bring objects of inquiry and research fields that had not previously been related into a joint conversation. Short case studies on predictive policing and automated content moderation show that algorithmic governance is multiple, contingent and contested: it takes different forms in different contexts and jurisdictions, and it is shaped by interests, power and resistance.
Christian Katzenbach and Kirsten Gollatz (Alexander von Humboldt Institute for Internet and Society, Germany). Abstract: Following recent theoretical contributions, this article suggests a new approach to finding the governance in Internet governance. Studies on Internet governance rely on contradictory notions of governance: the common understanding of governance as some form of deliberate steering or regulation clashes with equally common definitions of Internet governance as distributed modes of ordering. Drawing on controversies in the broader field of governance and regulation studies, we propose to resolve this conceptual conundrum by grounding governance in mundane activities of coordination. We define governance as reflexive coordination, focusing on those 'critical moments' when routine activities become problematic and need to be revised, that is, when regular coordination itself requires coordination. Regulation, in turn, can be understood as targeted public or private interventions aiming to influence the behaviour of others. With this distinction between governance and regulation, we offer a conceptual framework for empirical studies of doing Internet governance.
How to integrate artificial intelligence (AI) technologies into the functioning and structures of our society has become a concern of contemporary politics and public debates. In this paper, we investigate national AI strategies as a peculiar form of co-shaping this development: a hybrid of policy and discourse that offers imaginaries, allocates resources, and sets rules. Conceptually, the paper is informed by sociotechnical imaginaries, the sociology of expectations, myths, and the sublime. Empirically, we analyze the AI policy documents of four key players in the field, namely China, the United States, France, and Germany. The results show that the narrative construction of AI strategies is strikingly similar: they all establish AI as an inevitable and massively disruptive technological development by building on rhetorical devices such as a grand legacy and international competition. Having established this inevitable, yet uncertain, AI future, national leaders proclaim leadership intervention and articulate opportunities and distinct national pathways. While this narrative construction is quite uniform, the respective AI imaginaries are remarkably different, reflecting the vast cultural, political, and economic differences of the countries under study. As governments endow these imaginary pathways with massive resources and investments, they contribute to co-producing the installment of these futures and thus yield a performative lock-in function.