The sudden rise in the ability of machine learning methodologies, such as deep neural networks, to identify instances of malignant cell growth in radiological images with great accuracy has led prominent developers of this technology, such as Geoffrey Hinton, to hold the view that "[…] we should stop training radiologists." Similar views exist in other contexts regarding the replacement of humans with artificial intelligence (AI) technologies. The assumption behind such views is that deep neural networks are better than human radiologists: more accurate, less costly, and more predictive than their human counterparts. In this paper, I argue that these considerations, even if true, are simply inadequate as reasons to allocate to these sorts of artifacts the kind of trust suggested by Hinton and others. In particular, I show that if the same considerations were true of something other than an AI device, say a pigeon, we would not have sufficient reason to trust it in the way suggested for deep neural networks in a medical setting. If that is the case, then these considerations are also insufficient grounds for trusting AI enough to replace radiologists. Furthermore, I argue that the reliability of AI methodologies such as deep neural networks, which is at the center of this argument, has not yet been established, and that establishing it faces fundamental challenges. Because of these challenges, it is not possible to ascribe to such systems the level of reliability expected of a deployed medical device. So not only are the reasons cited in favor of deploying AI technologies in medical settings insufficient even if they are true, but knowing whether they are true faces non-trivial epistemic challenges. If this is so, then we have no good reasons to advocate replacing radiologists with AI methodologies such as deep neural networks.
We address some of the epistemological challenges highlighted by the Critical Data Studies literature by drawing on key debates in the philosophy of science concerning computational modeling and simulation. We provide a brief overview of these debates, focusing in particular on what Paul Humphreys calls epistemic opacity. We argue that debates in both Critical Data Studies and the philosophy of science have neglected the problem of error management and error detection, an especially important feature of the epistemology of Big Data. In the "Error" section we explain the main characteristics of error detection and correction, along with the relationship between error and path complexity in software. In the same section we provide an overview of conventional statistical methods for error detection and review their limitations in the face of the high degree of conditionality inherent in modern software systems.
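To make the point about conditionality concrete, the following minimal sketch (an assumed toy example, not taken from the paper; the routine process, the branch count n, and the sample budget are illustrative values) shows how each independent conditional doubles the number of execution paths, so that sampling-based error detection can exercise only a vanishing fraction of a program's behaviors.

```python
# Illustrative sketch (assumed toy example, not from the paper): how the
# conditionality of modern software drives path complexity and limits
# sampling-based error detection.

import random

def process(flags):
    """Toy routine whose behavior depends on n independent boolean flags."""
    result = 0
    for i, flag in enumerate(flags):
        if flag:                      # each conditional doubles the path count
            result += i
        else:
            result -= i
    return result

n = 30                                # independent conditionals
total_paths = 2 ** n                  # ~1.07 billion distinct execution paths

# A statistical test campaign sampling 10,000 random inputs exercises at most
# a negligible fraction of those paths.
samples = 10_000
for _ in range(samples):
    process([random.random() < 0.5 for _ in range(n)])

print(f"paths: {total_paths:,}  sampled (at most): {samples:,}  "
      f"coverage <= {samples / total_paths:.2e}")
```

Because the number of paths grows exponentially with the number of independent conditionals, purely statistical sampling cannot by itself certify the absence of error in highly conditional code.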