In this article, I explore the compatibility of inference to the best explanation (IBE) with several influential models and accounts of scientific explanation. First, I survey the different conceptions of IBE and limit my discussion to two: the heuristic conception and the objective Bayesian conception. Next, I discuss five models of scientific explanation with regard to each model's compatibility with IBE. I argue that Philip Kitcher's unificationist account supports IBE; Peter Railton's deductive-nomological-probabilistic model, Wesley Salmon's statistical-relevance model, and Bas van Fraassen's erotetic account are incompatible with IBE; and Wesley Salmon's causal-mechanical model is merely consistent with IBE. In short, many influential models of scientific explanation do not support IBE. I end by outlining three possible conclusions to draw: (1) either philosophers of science or defenders of IBE have seriously misconstrued the concept of explanation, (2) philosophers of science and defenders of IBE do not use the term 'explanation' univocally, or (3) the ampliative conception of IBE, which is compatible with any model of scientific explanation, deserves a closer look.
In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them less explainable. In this reply, it is argued that Erasmus et al. left out one influential account of explanation from their discussion: the unificationist model. It is argued that, on the unificationist model, the features that make something an explanation are sensitive to complexity. Therefore, on the unificationist model, ANNs (and other Machine Learning models) are not explainable. It is emphasized that Erasmus et al.'s general strategy is correct: the literature on explainable Artificial Intelligence can benefit from drawing on philosophical accounts of explanation. However, philosophical accounts of explanation do not settle the question of whether ANNs are explainable, because they do not unanimously declare that explanation is invariant with regard to complexity.
Several metaphysical naturalists argue that the success of science, together with the claim that scientists adhere to methodological naturalism, amounts to strong evidence for metaphysical naturalism. I call this the scientific-success argument. It is argued that the scientific-success argument is similar to the no-miracles argument for realism in the philosophy of science. According to the no-miracles argument, the success of science is strong evidence that scientific theories are (approximately) true. Given this similarity, some considerations relevant to one argument may also be relevant to the other. One such consideration is explored here. The selectionist response to the no-miracles argument holds that, on an evolutionary model of science, in which scientific theories are accepted only after surviving a rigorous selection process, the no-miracles argument fails. The selectionist response also applies to the scientific-success argument: if scientific theories are selected for success, we do not need to explain the success of science by appealing to metaphysical naturalism.