Much has been made of the speed at which disinformation diffuses through online social media, and this speed is an important consideration when designing interventions. An additional complexity is that different types of false information can travel from and through different communities, which respond in various ways within the same social media conversation. Here we present a case study exploring the speed and reach of three types of false stories found in the Twitter conversation around the Black Panther movie, comparing the diffusion of these stories with the community responses to them. We find that negative reactions to fake stories of racially motivated violence, whether in the form of debunking quotes or satirical posts, can spread at speeds orders of magnitude higher than those of the original fake stories. Satirical posts, while less viral than debunking quotes, appear to have longer lifetimes in the conversation. We also found that the majority of mixed-community members who originally spread fake stories later switched to attacking them. Our work illustrates the importance of analyzing, within the same overall conversation, the diffusion of different types of disinformation alongside the different responses to it.
Automated methods for extracting stance (denying vs. supporting opinions) from conversations on social media are essential to advancing opinion-mining research. Recently, there has been renewed excitement in the field as new models attempt to improve on the state of the art. However, the datasets used to train and evaluate these models are often small. Additionally, these small datasets have uneven class distributions: only a tiny fraction of the examples have favoring or denying stances, while most have no clear stance. Moreover, existing datasets do not distinguish between the different types of conversations on social media (e.g., replying vs. quoting on Twitter). As a result, models trained on one event do not generalize to other events. In the presented work, we create a new dataset by labeling stance in responses to posts on Twitter (both replies and quotes) on controversial issues. To the best of our knowledge, this is currently the largest human-labeled stance dataset for Twitter conversations, with over 5,200 stance labels. More importantly, we designed a tweet-collection methodology that favors the selection of denial-type responses, a class expected to be especially useful for identifying rumors and determining antagonistic relationships between users. We also include many baseline models for learning stance in conversations and compare their performance. We show that combining data from replies and quotes decreases model accuracy, indicating that the two modalities behave differently when it comes to stance learning.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.