Throughout the COVID-19 pandemic, conspiracy theories about the virus spread rapidly, and whilst governments across the globe put in place different restrictions and guidelines to contain the pandemic, these were not universally adhered to. This research examined the association between pandemic-related risk perceptions, belief in conspiracy theories, and compliance with COVID-19 public guidelines amongst a UK sample (n = 368). Participants rated their level of concern for a series of potential risks during the pandemic (to the economy, personal health, freedom, media integrity, and health risk to others). Participants also rated their level of belief in different conspiracy theories and self-reported their behaviour during the first UK lockdown. Mediational analyses showed that stronger belief in conspiracy theories was associated with perceptions of lower risk to health and higher risk to the economy and freedom, which in turn were associated with lower compliance with COVID-19-related governmental guidelines. Perception of information transparency risks did not mediate the association between belief in conspiracy theories and compliant behaviours. These results highlight the key role that risk perception may play in translating belief in conspiracy theories into low compliance with governmental COVID-19-related guidelines. Our findings suggest new patterns in the relationship between conspiracy theory adherence and the salience of different risk perceptions amidst the pandemic, which could inform the development of public health messaging and communication interventions.
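As a minimal sketch of the kind of indirect-effect (mediation) estimate described in this abstract, the snippet below bootstraps a product-of-coefficients estimate for one path (conspiracy belief → perceived health risk → compliance). The column names, the single-mediator structure, and the bootstrap settings are illustrative assumptions, not the study's actual model, which included several parallel risk perceptions.

```python
# Hypothetical illustration (not the authors' code) of a product-of-coefficients
# mediation estimate with a percentile bootstrap, mirroring the path
# conspiracy belief (X) -> perceived health risk (M) -> compliance (Y).
import numpy as np
import statsmodels.formula.api as smf

def indirect_effect(df, n_boot=2000, seed=0):
    """Bootstrap the indirect effect a*b of X on Y through M."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        sample = df.sample(len(df), replace=True,
                           random_state=int(rng.integers(0, 2**32 - 1)))
        # Path a: belief in conspiracy theories predicting perceived risk.
        a = smf.ols("risk_to_health ~ conspiracy_belief", sample) \
               .fit().params["conspiracy_belief"]
        # Path b: perceived risk predicting compliance, controlling for belief.
        b = smf.ols("compliance ~ risk_to_health + conspiracy_belief", sample) \
               .fit().params["risk_to_health"]
        estimates.append(a * b)
    estimates = np.asarray(estimates)
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])
```

A bootstrap confidence interval that excludes zero would be consistent with mediation of the kind the abstract reports; extending this to the study's multiple mediators would require a parallel-mediator model.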
Previous research has documented the existence of both online echo chambers and hostile intergroup interactions. In this paper, we explore the relationship between these two phenomena by studying the activity of 5.97M Reddit users and 421M comments posted over 13 years. We examine whether users who are more engaged in echo chambers are more hostile when they comment on other communities. We then create a typology of relationships between political communities based on whether their users are toxic to each other, whether echo chamber-like engagement with these communities has a polarizing effect, and the communities' political leanings. We observe both the echo chamber and hostile intergroup interaction phenomena, but neither holds universally across communities. Contrary to popular belief, we find that polarizing and toxic speech is more dominant between communities on the same, rather than opposing, sides of the political spectrum, especially on the left; however, this mostly points to the collective targeting of political outgroups.
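As a hedged illustration of one building block of such an analysis (not the paper's pipeline), the sketch below aggregates comments, assumed to carry a precomputed toxicity score and each author's "home" community, into a source-by-target matrix of mean toxicity. All column names are hypothetical.

```python
# Hypothetical sketch: mean toxicity of comments that members of one community
# leave in another, given a table with illustrative columns
# home_community, target_community, and toxicity.
import pandas as pd

def cross_community_toxicity(comments: pd.DataFrame) -> pd.DataFrame:
    """Return a source-by-target matrix of mean comment toxicity."""
    return (
        comments
        .groupby(["home_community", "target_community"])["toxicity"]
        .mean()
        .unstack()  # rows: commenters' home community; columns: where they post
    )
```

Comparing cells within the same side of the political spectrum against cells that cross it is the kind of within-side versus cross-side contrast the abstract's headline finding rests on.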
Misalignment in diversity and inclusion (D&I) occurs when organisations make claims that they do not uphold through their practices. In the present study, we probed a leader oversight bias which may lead leaders to overlook misalignment and, consequently, to perceive less organisational hypocrisy. Through an online experiment, we randomly allocated 198 participants to the role of either a leader or non-leader in a fictional organisation. T-tests showed that leaders perceived their organisation as less hypocritical than non-leaders did. This effect was sequentially mediated by anticipated justice following exposure to a gender diversity statement and perceived justice following exposure to organisational practices. Contrary to our predictions, gender did not moderate this effect. Our results demonstrate that leaders may perceive their organisations as more just and less hypocritical even when this may not be the case, because they attempt to confirm their prior, more favourable justice anticipations. This research calls for establishing better cross-hierarchical communication structures in organisations and for more effective management of leadership oversights. Future studies can examine this bias in field settings and isolate its driving mechanisms.
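The headline leader versus non-leader comparison could be reproduced, under assumed column names, with an independent-samples t-test along the lines of the sketch below; the sequential mediation reported in the abstract would additionally require a path model not shown here.

```python
# Hypothetical sketch of the leader vs. non-leader comparison of perceived
# organisational hypocrisy; the DataFrame columns "role" and "hypocrisy"
# are illustrative only, not the study's variable names.
from scipy import stats

def compare_hypocrisy(df):
    leaders = df.loc[df["role"] == "leader", "hypocrisy"]
    non_leaders = df.loc[df["role"] == "non_leader", "hypocrisy"]
    # Welch's t-test, which does not assume equal variances across groups.
    return stats.ttest_ind(leaders, non_leaders, equal_var=False)
```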
Previous work suggests that people's preference for different kinds of information depends on more than just accuracy. One reason is that the messages contained within different pieces of information may be either appealing or off-putting. Whereas factual information must often convey uncomfortable truths, misinformation can have little regard for veracity and leverage psychological processes that increase its attractiveness and proliferation on social media. In this review, we argue that misinformation proliferates when the social media environment reduces, rather than increases, the psychological cost of adhering to it. We cover how attention is often shifted away from accuracy and towards other goals, how misinformation affects social and individual cognition and under which conditions debunking it is most effective, and how the formation of online groups shapes information consumption patterns, often leading to greater polarization and radicalization. Throughout, we make the case that polarization and misinformation adherence are closely tied. We identify ways in which the psychological cost of adhering to misinformation can be increased when designing anti-misinformation interventions or resilient affordances, and we outline open research questions that the CSCW community can take up to further understand this cost.