Background: During the period surrounding the approval and initial distribution of Pfizer-BioNTech's COVID-19 vaccine, large numbers of social media users used their platforms to voice opinions on the vaccine. They formed pro- and anti-vaccination groups with the aim of influencing others to vaccinate or not to vaccinate. The methods of persuasion and manipulation used to convince audiences online can be characterized under a framework for social-cyber maneuvers known as the BEND maneuvers. Previous studies have examined the spread of COVID-19 vaccine disinformation. However, those studies lacked comparative analyses over time of both community stances and the competing techniques of manipulating narrative and network structure to persuade target audiences.

Objective: This study aimed to understand community response to vaccination by dividing Twitter data from the initial Pfizer-BioNTech COVID-19 vaccine rollout into pro-vaccine and anti-vaccine stances, identifying key actors and groups, and evaluating how the different communities used social-cyber maneuvers, or BEND maneuvers, to influence their target audiences and the network as a whole.

Methods: COVID-19 vaccine Twitter data were collected using the Twitter application programming interface (API) for 1-week periods before, during, and 6 weeks after the initial Pfizer-BioNTech rollout (December 2020 to January 2021). Bot identifications and linguistic cues were derived for users and tweets, respectively, to serve as metrics for evaluating social-cyber maneuvers. Organization Risk Analyzer (ORA)-PRO software was then used to separate the vaccine data into pro-vaccine and anti-vaccine communities and to facilitate identification of key actors, groups, and BEND maneuvers for a comparative analysis between each community and the entire network.

Results: Both the pro-vaccine and anti-vaccine communities used combinations of the 16 BEND maneuvers to persuade their target audiences of their particular stances. Our analysis showed how each side attempted to build its own community while simultaneously narrowing and neglecting the opposing community. Pro-vaccine users primarily used positive maneuvers, such as excite and explain messages, to encourage vaccination and backed leaders within their group. In contrast, anti-vaccine users relied on negative maneuvers, such as dismay and distort messages carrying narratives of side effects and death, and attempted to neutralize the effectiveness of leaders within the pro-vaccine community. Furthermore, nuking through platform policies proved effective in reducing both the size of the anti-vaccine online community and the quantity of anti-vaccine messages.

Conclusions: Social media continues to be a domain for manipulating beliefs and ideas. These conversations can ultimately lead to real-world actions, such as the decision to vaccinate or not to vaccinate against COVID-19. Moreover, social media policies should be further explored as an effective means of curbing disinformation and misinformation online.
Online talk about racism has been salient throughout the COVID-19 pandemic. Yet while such social media conversations reflect existing tensions in the offline world, the same discourse has also become a target for information operations aiming to heighten social divisions. This article examines Twitter discussions of racism in the first and sixth months after COVID-19 was accorded pandemic status by the World Health Organization and uncovers dynamic associations with bot activity and hate speech. Humans initially constituted the most hateful accounts in online conversations about racism in March, but by August, bots dominated hate speech. Over time, greater bot activity likewise amplified levels of hate speech one week later. Moreover, while discourse about racism in March primarily featured an organic focus on racial identities such as Asian and Chinese, we further observed a bot-dominated focus in August on political identities such as president, Democrat, and Republican. Although hate speech targeting Asian groups remained present in racism discussions in August, these findings suggest a bot-fueled redirection from focusing on racial groups at the onset of the pandemic to targeting politics closer to the 2020 US elections. This work enhances understanding of the complexity of racism discussions during the pandemic, their vulnerability to manipulation through information operations, and the large-scale quantitative study of inorganic hate campaigns in online social networks.
Democracies around the world face the threat of manipulation of their electorates via coordinated online influence campaigns. Researchers have responded by developing valuable methods for finding automated accounts and identifying false information, but these efforts often devolve into a cat-and-mouse game with perpetrators who constantly change their behavior. This has led several researchers to go beyond the detection of individual malicious actors and instead identify the coordinated activity that propels potent information operations. In this vein, we provide rigorous quantitative evidence for the notion that sudden increases in Twitter account creations may provide early warnings of online information operations. Analysis of fourteen months of tweets discussing the 2020 U.S. elections revealed that accounts created during bursts exhibited more similar behavior, showed more agreement on mail-in voting and mask wearing, and were more likely to be bots and to share links to low-credibility sites. In concert with other techniques for detecting nefarious activity, social media platforms could temporarily limit the influence of accounts created during these bursts. Given the advantages of combining multiple anti-misinformation methods, we join others in presenting a case for developing more integrable methods for countering online influence campaigns.