The rise of the alt-right as a potent and sometimes violent political force has been well documented. Yet how an individual comes to uphold these ideologies is less well understood. Alt-righters are not converted instantly, but rather nudged incrementally along a particular media pathway. Drawing on video testimonies, chat logs, and other studies, this paper explores the interaction between this alt-right “pipeline” and the psyche of a user. It suggests three overlapping cognitive phases that occur within this journey: normalization, acclimation, and dehumanization. Finally, the article examines the individual who has reached the end of this journey: an extremist who nevertheless remains largely unregistered within traditional terrorist classifications.
As awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics has been released in both the public and private sectors in the last several years. However, these are meaningless principles, contested or incoherent and therefore difficult to apply; they are isolated principles, situated in an industry and education system that largely ignore ethics; and they are toothless principles, lacking consequences and adhering to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and attempts are made to “operationalize” principles, the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.
Hate speech and toxic communication online are on the rise. Responses to this issue tend to offer technical (automated) or non-technical (human content moderation) solutions, or to see hate speech as a natural product of hateful people. In contrast, this article begins by recognizing platforms as designed environments that support particular practices while discouraging others. In what ways might these design architectures contribute to polarizing, impulsive, or antagonistic behaviors? Two platforms are examined: Facebook and YouTube. Facebook's engagement-based Feed drives views but also privileges incendiary content, setting up a stimulus-response loop that promotes the expression of outrage. YouTube's recommendation system is a key interface for content consumption, yet this same design has been criticized for leading users towards more extreme content. Across both platforms, design proves central and influential, offering a productive lens for understanding toxic communication.
On 6 January 2021, a violent mob attacked the United States Capitol. Yet while “mob” suggests a chaotic and fragmented crowd, networked media had already been working to provide it with “just enough” cohesion, transforming it into a more dangerous political body. This article conceptualizes this preparatory media by examining the “free speech” social media network Parler, drawing on a corpus of ∼350,000 posts from the days leading up to and including the attack. This material empirically demonstrates how media worked to forge connections between disparate camps, to incite participants toward violent activity, and to legitimize the attack as moral or even spiritual. Preparatory media frames events, establishes targets, and sets agendas, providing a degree of order and working against disaggregation online. This temporary stabilization contributes to a more mobilized and organized public body. Rather than being prosocial or emancipatory, this work can serve far darker ends, as the Capitol storming demonstrates. Understanding this role of media and intervening within its logics provides one component for preventing future attacks.