Prior social contagion models consider the spread of either one contagion on interdependent networks or multiple contagions on single-layer networks, usually under assumptions of competition. We propose a new threshold model for the diffusion of multiple contagions. Individuals are placed on a multiplex network with a periodic-lattice layer and a random-regular-graph layer. On these population structures, we study the interplay between two key aspects of the diffusion process: the level of synergy between two contagions and the rate at which individuals become dormant after adoption. Dormancy is a looser form of immunity that limits active spreading without conferring resistance. Monte Carlo simulations reveal that lower synergy makes contagions more susceptible to percolation, especially those that diffuse on the lattice. Faster diffusion of one contagion with dormancy probabilistically blocks the diffusion of the other, in a manner similar to ring vaccination. We show that, within a band of synergy, bimodal or trimodal branching occurs for the slower contagion on the lattice. We also show that complementary contagions can provide a synergistic boost that helps spread contagions that have almost gone dormant.
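To make the model concrete, here is a minimal Monte Carlo sketch of a two-contagion threshold process on a two-layer multiplex (periodic lattice plus random regular graph), with a synergy term that lowers the adoption threshold and a per-step dormancy probability. The parameter values, the fraction-based threshold rule, and the synchronous update scheme are illustrative assumptions, not the paper's exact specification.

"""
A minimal sketch (not the authors' code) of a two-contagion threshold model on a
two-layer multiplex: contagion A spreads on a periodic lattice, contagion B on a
random regular graph over the same node set. Thresholds, synergy, and dormancy
rates below are illustrative assumptions, not values from the paper.
"""
import random
import networkx as nx

N_SIDE = 20                            # lattice is N_SIDE x N_SIDE, so N = 400 nodes
N = N_SIDE * N_SIDE
THRESHOLD = 0.3                        # baseline adoption threshold (fraction of active neighbors)
SYNERGY = 0.15                         # threshold reduction if the node already adopted the other contagion
P_DORMANT = {"A": 0.05, "B": 0.20}     # per-step probability that an active adopter goes dormant
STEPS = 50

# Layer for contagion A: periodic 2D lattice; layer for contagion B: random 4-regular graph.
lattice = nx.convert_node_labels_to_integers(nx.grid_2d_graph(N_SIDE, N_SIDE, periodic=True))
rrg = nx.random_regular_graph(4, N, seed=1)
layers = {"A": lattice, "B": rrg}

# Per-contagion node states: "S" (susceptible), "I" (active adopter), "D" (dormant).
state = {c: {v: "S" for v in range(N)} for c in layers}
for c in layers:                       # seed each contagion at one random node
    state[c][random.randrange(N)] = "I"

def try_adopt(node, c):
    """Threshold rule: adopt c if the active fraction of layer-c neighbors exceeds
    the threshold, which is lowered by SYNERGY if the node has already adopted
    (is active or dormant in) the other contagion."""
    nbrs = list(layers[c].neighbors(node))
    active_frac = sum(state[c][u] == "I" for u in nbrs) / len(nbrs)
    other = "B" if c == "A" else "A"
    thr = THRESHOLD - (SYNERGY if state[other][node] != "S" else 0.0)
    return active_frac >= thr

for _ in range(STEPS):
    # Synchronous update: compute adoptions from the current state, then apply them.
    new_adopters = {c: [v for v in range(N) if state[c][v] == "S" and try_adopt(v, c)]
                    for c in layers}
    for c, nodes in new_adopters.items():
        for v in nodes:
            state[c][v] = "I"
    # Dormancy: active adopters stop spreading but still count as adopters.
    for c in layers:
        for v in range(N):
            if state[c][v] == "I" and random.random() < P_DORMANT[c]:
                state[c][v] = "D"

for c in layers:
    adopted = sum(s != "S" for s in state[c].values())
    print(f"contagion {c}: {adopted}/{N} nodes adopted")

Varying SYNERGY and P_DORMANT in a sketch like this is one way to probe the blocking effect the abstract describes: a fast-dormant contagion leaves behind rings of non-spreading adopters that the slower contagion must route around.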
Background: The novel coronavirus, also known as SARS-CoV-2, has come to define much of our lives since the beginning of 2020. During this time, countries around the world imposed lockdowns and social distancing measures. The physical movements of people ground to a halt, while their online interactions increased as they turned to engaging with one another virtually. As the means of communication shifted online, so did information consumption. Governing authorities and health agencies intentionally shifted their focus to social media and online platforms to spread factual and timely information. However, this also opened the gate for misinformation, contributing to and accelerating the phenomenon of misinfodemics.
Objective: We analyzed Twitter discourse across more than 1 billion COVID-19-related tweets collected over a year to identify and investigate prevalent misinformation narratives and trends. We also aimed to characterize the Twitter audience most susceptible to health-related misinformation and the network mechanisms driving misinfodemics.
Methods: We leveraged a data set that we collected and made public, containing over 1 billion tweets related to COVID-19 posted between January 2020 and April 2021. We created a subset of this larger data set by isolating tweets that included URLs with domains identified by Media Bias/Fact Check as prone to publishing questionable or misinformation content. Using clustering and topic modeling techniques, we identified the major narratives, including health misinformation and conspiracies, present within this subset of tweets.
Results: Our focus was on a subset of 12,689,165 tweets that we determined were representative of COVID-19 misinformation narratives in our full data set. When analyzing tweets that shared content from domains known to be questionable or to promote misinformation, we found that a few key misinformation narratives emerged, concerning hydroxychloroquine and alternative medicines, US officials and governing agencies, and COVID-19 prevention measures. We further analyzed the misinformation retweet network and found that users who shared both questionable and conspiracy-related content were clustered more closely in the network than other users, supporting the hypothesis that echo chambers contribute to the spread of health misinfodemics.
Conclusions: We presented a summary and analysis of the major misinformation discourse surrounding COVID-19 and of those who promoted and engaged with it. While misinformation is not limited to social media platforms, we hope that our insights, particularly those pertaining to health-related emergencies, will help pave the way for computational infodemiology to inform health surveillance and interventions.
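The filtering and narrative-extraction step described in the Methods can be sketched roughly as follows: keep only tweets whose attached URLs resolve to flagged domains, then fit a topic model to the retained text. The file names, tweet field names, LDA settings, and the use of scikit-learn are assumptions for illustration, not the authors' actual pipeline.

"""
A minimal sketch (assumed inputs, not the authors' pipeline) of domain-based filtering
followed by topic modeling: keep tweets whose URLs resolve to domains flagged as
questionable or conspiracy-prone, then fit LDA to surface major narratives.
`tweets.jsonl` and `flagged_domains.txt` are hypothetical input files.
"""
import json
from urllib.parse import urlparse

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical inputs: one tweet object per line, and one flagged domain per line.
with open("flagged_domains.txt") as f:
    flagged = {line.strip().lower() for line in f if line.strip()}

def domains_of(tweet):
    """Yield the host of each expanded URL attached to a tweet object."""
    for u in tweet.get("entities", {}).get("urls", []):
        host = urlparse(u.get("expanded_url", "")).netloc.lower()
        yield host[4:] if host.startswith("www.") else host

texts = []
with open("tweets.jsonl") as f:
    for line in f:
        tweet = json.loads(line)
        if any(d in flagged for d in domains_of(tweet)):
            texts.append(tweet["full_text"])          # assumed field name

# Bag-of-words + LDA; 10 topics is an arbitrary illustrative choice.
vec = CountVectorizer(stop_words="english", max_df=0.95, min_df=5)
X = vec.fit_transform(texts)
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")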
Using more than 4 billion tweets and labels on more than 5 million users, this paper compares the political and semantic behavior of humans and bots during the pandemic. Results reveal that liberal bots are more central than humans in general but less important than institutional humans as the elite circle grows smaller. Conservative bots are surprisingly absent compared with prior work on political discourse, yet they are better than liberal bots at eliciting replies from humans, which suggests they may be perceived as human more frequently. In terms of topic and framing, conservative humans and bots disproportionately tweet about the Bill Gates and bioweapon conspiracies, whereas the 5G conspiracy is bipartisan. Conservative humans selectively ignore mask-wearing, and we observe prevalent out-group tweeting when policy is discussed. We discuss and contrast how humans appear more central in health-related discourse than in political events, which suggests the importance of credibility and authenticity for public health in online information diffusion.
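A rough sketch of the kind of centrality comparison described here: build a directed retweet network, score accounts with PageRank, and average scores within each (ideology, bot/human) group. The input files, column names, and the choice of PageRank are hypothetical stand-ins, not the paper's exact measures.

"""
A minimal sketch (assumed input format, not the authors' code) of a bot-vs-human
centrality comparison on a retweet network. `retweets.csv` with columns
(retweeter, original_author) and `labels.csv` with columns (user, is_bot, ideology)
are hypothetical inputs.
"""
import csv
from statistics import mean

import networkx as nx

# Directed edge: retweeter -> original author (attention flows toward the author).
G = nx.DiGraph()
with open("retweets.csv") as f:
    for row in csv.DictReader(f):
        G.add_edge(row["retweeter"], row["original_author"])

labels = {}
with open("labels.csv") as f:
    for row in csv.DictReader(f):
        labels[row["user"]] = (row["is_bot"] == "1", row["ideology"])

pr = nx.pagerank(G, alpha=0.85)

# Average centrality per (ideology, bot/human) group.
groups = {}
for user, score in pr.items():
    if user in labels:
        is_bot, ideology = labels[user]
        groups.setdefault((ideology, "bot" if is_bot else "human"), []).append(score)

for (ideology, kind), scores in sorted(groups.items()):
    print(f"{ideology:>12} {kind:>5}: mean PageRank {mean(scores):.2e} over {len(scores)} accounts")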
From fact-checking chatbots to community-maintained misinformation databases, Taiwan has emerged as a critical case study for online citizen participation in politics. Due to Taiwan's geopolitical history with China, the recent 2020 Taiwanese presidential election brought fierce online engagement from citizens on both sides of the strait. In this article, we study misinformation and digital participation on three platforms: Line, Twitter, and Taiwan's Professional Technology Temple (PTT, Taiwan's equivalent of Reddit). Each platform presents a different facet of the election. Results reveal that the greatest level of disagreement occurs in discussion about incumbent President Tsai. Chinese users demonstrate emergent coordination and selective discussion around topics such as China, Hong Kong, and President Tsai, whereas topics such as COVID-19 are avoided. We also find an imbalance in Tsai's political presence on Twitter, which suggests partisan practices in disinformation regulation. The cases of Taiwan and China point toward a growing trend in which regular citizens, enabled by new media, can both exacerbate and hinder the flow of misinformation. The study highlights an overlooked aspect of misinformation studies beyond the veracity of information itself: the clash of ideologies, practices, and cultural histories that matters to democratic ideals.