Bayesian reasoning and decision making is widely considered normative because it minimizes prediction error in a coherent way. However, it is often difficult to apply Bayesian principles to complex real-world problems, which typically have many unknowns and interconnected variables. Bayesian network modeling techniques make it possible to model such problems and obtain precise predictions about the causal impact that changing the value of one variable may have on the values of other variables connected to it. But Bayesian modeling is itself complex, and has until now remained largely inaccessible to lay people. In a large-scale lab experiment, we provide proof of principle that a Bayesian network modeling tool, adapted to provide basic training and guidance on the modeling process to beginners without requiring knowledge of the mathematical machinery working behind the scenes, significantly helps lay people find normative Bayesian solutions to complex problems, compared to generic training on probabilistic reasoning. We discuss the implications of this finding for the use of Bayesian network software tools in applied contexts such as security, medical, forensic, economic or environmental decision making.
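To make concrete the kind of inference such a tool automates, here is a minimal sketch in plain Python of a two-node network Cause → Effect, with illustrative probabilities that are not taken from the study: a predictive query for the effect, and a diagnostic query for the cause via Bayes' rule.

```python
# Minimal sketch of the inference a Bayesian network tool automates.
# Two-node network Cause -> Effect with made-up (illustrative) probabilities.

p_cause = 0.2                      # prior P(Cause = true)
p_effect_given_cause = 0.9         # P(Effect = true | Cause = true)
p_effect_given_not_cause = 0.1     # P(Effect = true | Cause = false)

# Predictive query: marginal probability of the effect.
p_effect = (p_effect_given_cause * p_cause
            + p_effect_given_not_cause * (1 - p_cause))

# Diagnostic query via Bayes' rule: probability of the cause given the effect.
p_cause_given_effect = p_effect_given_cause * p_cause / p_effect

print(f"P(Effect) = {p_effect:.3f}")                      # 0.260
print(f"P(Cause | Effect) = {p_cause_given_effect:.3f}")  # 0.692
```

In a full network tool, the same calculation is propagated automatically across many interconnected variables, which is what lets users query the downstream impact of changing any single variable.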
Causal judgements in explaining-away situations, where multiple independent causes compete to account for a common effect, are ubiquitous in both everyday and specialised contexts. Despite their ubiquity, cognitive psychologists still struggle to understand how people reason in these contexts. Empirical studies have repeatedly found that people tend to 'insufficiently' explain away: that is, when one cause explains the presence of an effect, people do not sufficiently reduce the probability of other competing causes. However, the diverse accounts that researchers have proposed to explain this insufficiency suggest we are yet to find a compelling account of these results. In the current research we explored the novel possibility that insufficiency in explaining away is driven by: (i) some people interpreting probabilities as propensities, i.e. as tendencies of a physical system to produce an outcome and (ii) some people splitting the probability space among the causes in diagnostic reasoning, i.e. by following a strategy we call 'the diagnostic split'. We tested these two hypotheses by manipulating (a) the characteristics of cover stories to reflect different degrees to which the propensity interpretation of probability was pronounced, and (b) the prior probabilities of the causes which entailed different normative amounts of explaining away. Our results were in line with the extant literature as we found insufficient explaining away. However, we also found empirical support for our two hypotheses, suggesting that they are a driving force behind the reported insufficiency.
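To make the normative benchmark for explaining away concrete, the sketch below (plain Python, with illustrative numbers that are not from the paper) works through a common-effect network A → E ← B: once the effect E is observed, learning that the competing cause B is present should lower the probability of A.

```python
from itertools import product

# Explaining away in a common-effect network A -> E <- B,
# with made-up (illustrative) priors and conditional probabilities.
p_a, p_b = 0.3, 0.3                                  # independent priors P(A), P(B)
p_e = {(True, True): 0.95, (True, False): 0.80,      # P(E = true | A, B)
       (False, True): 0.80, (False, False): 0.05}

def joint_e_true(a, b):
    """Joint probability P(A=a, B=b, E=true) from the priors and CPT above."""
    return (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b) * p_e[(a, b)]

# Diagnostic query: P(A | E).
p_e_true = sum(joint_e_true(a, b) for a, b in product([True, False], repeat=2))
p_a_given_e = sum(joint_e_true(True, b) for b in (True, False)) / p_e_true

# Explaining away: additionally learning B = true should lower the probability of A.
p_a_given_e_and_b = joint_e_true(True, True) / (
    joint_e_true(True, True) + joint_e_true(False, True))

print(f"P(A | E)    = {p_a_given_e:.3f}")        # ~0.568
print(f"P(A | E, B) = {p_a_given_e_and_b:.3f}")  # ~0.337: B explains A away
```

The gap between the two posteriors is the normative amount of explaining away; the insufficiency reported in the literature corresponds to people reducing P(A) by less than this amount, or not at all.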
As AI systems come to permeate human society, there is an increasing need for such systems to explain their actions, conclusions, or decisions. This is presently fuelling a surge in interest in machine-generated explanation in the field of explainable AI. In this chapter, we examine work on explanations in areas ranging from AI to philosophy, psychology, and cognitive science. We point to different notions of explanation that are at play in these areas. We further discuss the theoretical work in philosophy and psychology on (good) explanation and its implications for the research on machine-generated explanations. Lastly, we consider the pragmatic nature of explanations and showcase its importance in the context of trust and fidelity. Throughout the chapter we suggest paths for further research on explanation in AI, psychology, philosophy, and cognitive science.
In this paper, we bring together two closely related, but distinct, notions: argument and explanation. We clarify their relationship and then provide an integrative review of relevant research on these notions, drawn from both the cognitive science and artificial intelligence (AI) literatures. We then use this material to identify key directions for future research, indicating areas where bringing together cognitive science and AI perspectives would be mutually beneficial. This article is part of a discussion meeting issue ‘Cognitive artificial intelligence’.