We present a Bayesian analysis of the epistemology of analogue experiments, with particular reference to Hawking radiation. First, we prove that such experiments can be confirmatory in Bayesian terms based upon appeal to 'universality arguments'. Second, we provide a formal model for the scaling behaviour of the confirmation measure for multiple distinct realisations of the analogue system and isolate a generic saturation feature. Finally, we demonstrate that different potential analogue realisations could provide different levels of confirmation. Our results provide a basis both to formalise the epistemic value of analogue experiments that have been conducted and to advise scientists as to the respective epistemic value of future analogue experiments.
In this paper we argue for the existence of analogue simulation as a novel form of scientific inference with the potential to be confirmatory. This notion is distinct from the modes of analogical reasoning detailed in the literature, and draws inspiration from fluid dynamical 'dumb hole' analogues to gravitational black holes. For that case, which is considered in detail, we defend the claim that the phenomenon of gravitational Hawking radiation could be confirmed if its counterpart were detected in experiments conducted on diverse realisations of the analogue model. A prospectus is given for further potential cases of analogue simulation in contemporary science.
Neural networks for semantic segmentation can be seen as statistical models that provide, for each pixel of an image, a probability distribution over predefined classes. The predicted class is then usually obtained via the maximum a posteriori probability (MAP), which is known as the Bayes rule in decision theory. From decision theory we also know that the Bayes rule is optimal with respect to the simple symmetric cost function. It therefore weights every type of confusion between two different classes equally; e.g., given images of urban street scenes, the cost function makes no distinction between the network confusing a person with a street and confusing a building with a tree. Intuitively, however, some confusions of classes are more important to avoid than others. In this work, we want to raise awareness of the possibility of explicitly defining confusion costs, and of the associated ethical difficulties when it comes down to providing numbers. We define two cost functions from different extreme perspectives, an egoistic and an altruistic one, and show how safety-relevant quantities like precision / recall and (segment-wise) false positive / negative rates change when interpolating between the MAP, egoistic and altruistic decision rules.
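The contrast between the MAP rule and a cost-sensitive Bayes rule can be illustrated with a minimal sketch. The posterior values, class labels and cost matrix below are hypothetical, chosen only to show how an asymmetric cost matrix can flip the decision away from the MAP class; they are not the cost functions defined in the paper.

```python
import numpy as np

# Hypothetical posterior for a single pixel over three illustrative classes:
# 0 = road, 1 = person, 2 = building.
posterior = np.array([0.5, 0.3, 0.2])

# MAP decision: pick the class with maximal posterior probability.
# This is Bayes-optimal for the symmetric 0-1 cost function.
map_class = int(np.argmax(posterior))  # class 0 (road)

# Hypothetical asymmetric cost matrix C[true, predicted]:
# overlooking a person (true class 1) is penalised heavily.
C = np.array([
    [0.0,  1.0, 1.0],   # true road
    [10.0, 0.0, 5.0],   # true person
    [1.0,  1.0, 0.0],   # true building
])

# General Bayes rule: predict the class minimising expected cost,
# i.e. argmin_j  sum_i  posterior[i] * C[i, j].
expected_cost = posterior @ C
bayes_class = int(np.argmin(expected_cost))  # class 1 (person)
```

With the symmetric cost function the pixel is labelled "road", but once confusing a person with anything else is made expensive, the expected-cost decision switches to "person" even though its posterior probability is lower. Interpolating between cost matrices of this kind is what moves a decision rule between the MAP, egoistic and altruistic extremes discussed above.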
No-go theorems have played an important role in the development and assessment of scientific theories. They have stopped whole research programmes and have given rise to strong ontological commitments. Given the importance they have evidently had in physics and the philosophy of physics, and the large body of literature on the consequences of specific no-go theorems, relatively little attention has been paid to the more abstract assessment of no-go theorems as a tool in theory development. We provide this abstract assessment here and conclude that the methodological implications one may legitimately draw from no-go theorems are in disagreement with the implications that have often been drawn from them in the history of science.