Plastic pollution is a pervasive and growing problem. To estimate the effectiveness of interventions to reduce plastic pollution, we modeled stocks and flows of municipal solid waste and four sources of microplastics through the global plastic system for five scenarios between 2016 and 2040. Implementing all feasible interventions reduced plastic pollution by 40% from 2016 rates and 78% relative to ‘business as usual’ in 2040. Even with immediate and concerted action, 710 million metric tons of plastic waste cumulatively entered aquatic and terrestrial ecosystems. To avoid a massive build-up of plastic in the environment, coordinated global action is urgently needed to reduce plastic consumption, increase rates of reuse, waste collection and recycling, expand safe disposal systems and accelerate innovation in the plastic value chain.
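The scenario analysis described here can be pictured with a toy stock-and-flow calculation. The sketch below is illustrative only: the baseline waste figure, growth rates, and managed-waste shares are placeholder assumptions, not the calibrated inputs or structure of the published model.

```python
# Toy stock-and-flow scenario sketch (placeholder parameters only;
# NOT the calibrated inputs of the published model).

def cumulative_pollution(waste_2016, growth, managed_share):
    """Cumulative mismanaged plastic waste (Mt) over 2016-2040."""
    total = 0.0
    for i in range(2040 - 2016 + 1):
        waste = waste_2016 * (1 + growth) ** i   # annual waste generated
        total += waste * (1 - managed_share)     # share escaping to ecosystems
    return total

# Hypothetical scenarios: 'business as usual' vs. 'all feasible interventions'.
bau = cumulative_pollution(waste_2016=250, growth=0.03, managed_share=0.70)
act = cumulative_pollution(waste_2016=250, growth=0.01, managed_share=0.93)
print(f"BAU: {bau:.0f} Mt; interventions: {act:.0f} Mt; "
      f"relative reduction: {1 - act / bau:.0%}")
```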
Echo chambers (ECs) are enclosed epistemic circles where like-minded people communicate and reinforce pre-existing beliefs. It remains unclear whether cognitive errors are necessary for ECs to emerge, and how ECs persist in networks where contrary information is available. We show that ECs can theoretically emerge amongst error-free Bayesian agents, and that larger networks encourage rather than ameliorate EC growth. This suggests that the network structure itself contributes to echo chamber formation. While cognitive and social biases might exacerbate EC emergence, they are not necessary conditions. In line with this, we test stylized interventions to reduce EC formation, finding that system-wide truthful ‘educational’ broadcasts ameliorate the effect, but do not remove it entirely. Such interventions are shown to be more effective on agents newer to the network. Critically, this work serves as a formal argument for the responsibility of system architects in mitigating EC formation and retention.
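A stylized toy can illustrate the kind of dynamics this abstract describes; it is not the authors' model. The reliability parameter, network size, and rewiring rule below are all assumptions made for illustration: agents update on neighbours' reports by Bayes' rule, and mild homophilic rewiring alone produces opposed belief clusters.

```python
# Stylized toy of belief clustering among Bayesian updaters in a network.
# NOT the authors' model; parameters are illustrative assumptions.
import math, random

random.seed(0)
N, STEPS, R = 50, 2000, 0.7          # agents, update rounds, assumed reliability
LLR = math.log(R / (1 - R))          # log-likelihood ratio of a single report
belief = [random.uniform(-1, 1) for _ in range(N)]    # log-odds in favour of H
neigh = {i: random.sample([j for j in range(N) if j != i], 5) for i in range(N)}

for _ in range(STEPS):
    i = random.randrange(N)
    j = random.choice(neigh[i])
    report = belief[j] > 0                  # neighbour reports its current lean
    belief[i] += LLR if report else -LLR    # Bayesian update on that report
    # Homophily: sometimes replace a disagreeing neighbour with a random agent.
    if (belief[i] > 0) != report and random.random() < 0.3:
        neigh[i][neigh[i].index(j)] = random.choice(
            [k for k in range(N) if k != i])

pro = sum(b > 0 for b in belief)
print(f"{pro} agents lean pro-H, {N - pro} lean anti-H (clustered by network)")
```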
There are many instances, both in professional domains such as law, forensics, and medicine, and in everyday life, where an effect (e.g. a piece of evidence or an event) has multiple possible causes. In three experiments we demonstrate that individuals erroneously assume that evidence which is equally predicted by two competing hypotheses offers no support for either hypothesis. However, this assumption holds only when the competing causes are mutually exclusive and exhaustive (i.e. exactly one cause is true). We argue this reasoning error is due to a zero-sum perspective on evidence, wherein people assume that evidence which supports one causal hypothesis must disconfirm its competitor, so that evidence can never lend positive support to both competitors. Across three experiments (N = 49; N = 193; N = 201) we demonstrate this error is robust to intervention and generalizes across several different contexts. We also rule out several alternative explanations of the bias.
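A small worked Bayes computation makes the zero-sum error concrete. The priors and likelihoods below are invented for illustration and do not come from the experiments: when the two hypotheses exhaust the possibilities, equally predicted evidence changes nothing; once a third cause is possible, the same evidence supports both competitors at once.

```python
# Worked example of the zero-sum evidence error (illustrative numbers only).

def posterior(priors, likelihoods):
    """Bayes' rule over an exhaustive set of hypotheses."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)
    return [round(j / z, 3) for j in joint]

# Case 1: H1 and H2 mutually exclusive AND exhaustive, E equally likely
# under both -> the evidence leaves the posteriors unchanged.
print(posterior([0.5, 0.5], [0.8, 0.8]))            # [0.5, 0.5]

# Case 2: a third possible cause H3 under which E is unlikely. The same
# evidence now raises BOTH P(H1) and P(H2) above their priors of 0.4,
# at H3's expense -- support is not zero-sum between H1 and H2.
print(posterior([0.4, 0.4, 0.2], [0.8, 0.8, 0.1]))  # [0.485, 0.485, 0.03]
```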
Misinformation has become an increasingly topical field of research. Studies on the 'Continued Influence Effect' (CIE) show that misinformation continues to influence reasoning despite subsequent retraction. Current explanatory theories of the CIE tacitly assume that continued reliance on misinformation is the consequence of a biased process. In the present work, we show why this perspective may be erroneous. Using a Bayesian formalism, we conceptualize the CIE as a scenario involving contradictory testimonies and incorporate the previously overlooked factors of the temporal dependence (misinformation precedes its retraction) between, and the perceived reliability of, misinforming and retracting sources. When considering such factors, we show the CIE to have normative backing. We demonstrate that, on aggregate, lay reasoners (N = 101) intuitively endorse the necessary assumptions that demarcate the CIE as a rational process, still exhibit the standard effect, and appropriately penalize the reliability of contradicting sources. Individual-level analyses revealed that although many participants endorsed assumptions for a rational CIE, very few were able to execute the complex model update that the Bayesian model entails. In sum, we provide a novel illustration of the pervasive influence of misinformation as the consequence of a rational process.
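A minimal sketch of the contradictory-testimony update can show why some continued influence is normatively defensible. The reliability values below are illustrative assumptions, not the paper's fitted parameters: each report is weighted by its source's perceived reliability, and a less reliable retraction leaves residual belief in the original claim.

```python
# Minimal sketch of a contradictory-testimony update (illustrative
# reliabilities; NOT the paper's model or fitted values).

def post_prob_H(prior_odds, r_assert, r_retract):
    """P(H) after an assertion of H and a later retraction (asserting not-H),
    treating each report as evidence weighted by its source's reliability."""
    odds = (prior_odds
            * (r_assert / (1 - r_assert))        # assertion supports H
            * ((1 - r_retract) / r_retract))     # retraction opposes H
    return odds / (1 + odds)

# If the retracting source is perceived as less reliable than the original
# one, some belief in the misinformation rationally survives the retraction.
print(post_prob_H(prior_odds=1.0, r_assert=0.8, r_retract=0.6))  # ~0.73 > 0.5
```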