Some recent developments in Artificial Intelligence, especially the use of machine learning systems trained on big data sets and deployed in socially significant and ethically weighty contexts, have led to a number of calls for "transparency." This paper explores the epistemological and ethical dimensions of that concept, and surveys and taxonomises the variety of ways in which it has been invoked in recent discussions. Whilst "outward" forms of transparency (concerning the relationship between an AI system, its developers, its users, and the media) may be straightforwardly achieved, what I call "functional" transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative, so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.
In this article, we describe a hitherto undocumented fallacy (in the sense of a mistake in reasoning) constituted by a negativity bias in the way that people attribute motives to others. We call this the “worst-motive fallacy,” and we conducted two experiments to investigate it. In Experiment 1 (N = 323), participants expected protagonists in a variety of fictional vignettes to pursue courses of action that satisfy the protagonists’ worst motive, and furthermore, participants expected the protagonists to pursue significantly worse courses of action than they would prefer themselves. Experiment 2 (N = 967) was a preregistered attempted replication of Experiment 1, including a broader range of vignettes; the first effect was not replicated for the new vignettes tested but was replicated for the original set. Also, we once again found that participants expected protagonists to be more likely than they were themselves to pursue courses of action that they considered morally bad. We discuss the worst-motive fallacy’s relation to other well-known biases, as well as its possible evolutionary origins and its ethical (and meta-ethical) consequences.