Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner's beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in 2 studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go "beyond the information given" and use the available data to make inferences at the (unobserved) causal level rather than at the (observed) data level.
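To make the structure-averaging idea concrete, here is a minimal sketch that estimates the diagnostic probability P(cause | effect) by weighting two candidate structures, one with and one without a causal link, by how well each explains the observed contingency data. The noisy-OR parameterization, the uniform grid priors, and all variable names are assumptions made for illustration, not the authors' exact model specification.

```python
# Minimal sketch, assuming a noisy-OR parameterization and uniform grid priors.
import numpy as np
from itertools import product

def cell_probs(b_c, w_c, w_a, link):
    """Joint probabilities of the four cause/effect cells.
    b_c: base rate of the cause, w_c: causal strength, w_a: background strength.
    If link is False, the cause has no influence on the effect."""
    p_e_given_c = w_c + w_a - w_c * w_a if link else w_a
    p_e_given_nc = w_a
    return np.array([
        b_c * p_e_given_c,              # cause present, effect present
        b_c * (1 - p_e_given_c),        # cause present, effect absent
        (1 - b_c) * p_e_given_nc,       # cause absent, effect present
        (1 - b_c) * (1 - p_e_given_nc), # cause absent, effect absent
    ])

def diagnostic_probability(counts, grid=21):
    """counts = [N(c,e), N(c,~e), N(~c,e), N(~c,~e)].
    Returns P(cause | effect) averaged over the link/no-link structures,
    each weighted by its grid-approximated marginal likelihood."""
    counts = np.asarray(counts)
    axis = np.linspace(0.01, 0.99, grid)
    evidence, p_c_given_e = {}, {}
    for link in (False, True):
        lik_sum, weighted_pce = 0.0, 0.0
        for b_c, w_c, w_a in product(axis, repeat=3):
            p = cell_probs(b_c, w_c, w_a, link)
            lik = np.prod(p ** counts)              # likelihood of the sample
            pce = p[0] / (p[0] + p[2])              # P(cause | effect) at these parameters
            lik_sum += lik
            weighted_pce += lik * pce
        evidence[link] = lik_sum
        p_c_given_e[link] = weighted_pce / lik_sum  # posterior mean within the structure
    z = evidence[False] + evidence[True]            # equal prior over the two structures
    return sum(evidence[s] / z * p_c_given_e[s] for s in (False, True))

# Example: 20 observations in which the effect tracks the cause fairly well.
print(round(diagnostic_probability([8, 2, 3, 7]), 3))
```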
Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.
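To illustrate the proposed mechanism, the sketch below simulates a common-cause network in which transmission failures originate either from a single error source located at the cause or from independent error sources located at the effects; the resulting conditional dependence between the effects indexes the size of the Markov violation. The mixture weight, strength value, and sample size are illustrative assumptions, not the published model's parameters.

```python
# Minimal sketch: cause-side vs. effect-side error in a common-cause net C -> E1, E2.
import numpy as np

rng = np.random.default_rng(0)

def simulate_effects(n, p_shared, strength=0.8):
    """Simulate n trials with the cause present. On a shared-error trial one
    failure affects both links (cause-side error); otherwise each link can fail
    independently (effect-side error)."""
    shared_ok = rng.random(n) < strength
    ok1, ok2 = rng.random(n) < strength, rng.random(n) < strength
    use_shared = rng.random(n) < p_shared
    e1 = np.where(use_shared, shared_ok, ok1)
    e2 = np.where(use_shared, shared_ok, ok2)
    return e1, e2

def markov_violation(p_shared, n=200_000):
    """Size of the Markov violation: P(E2 | C present, E1 present) minus
    P(E2 | C present, E1 absent); zero when the effects are conditionally independent."""
    e1, e2 = simulate_effects(n, p_shared)
    return e2[e1].mean() - e2[~e1].mean()

for p_shared in (0.0, 0.5, 1.0):
    print(f"share of cause-side error {p_shared}: violation ~ {markov_violation(p_shared):.2f}")
# The more error is attributed to the cause, the larger the conditional
# dependence between the effects, i.e., the Markov violation.
```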
A large body of research has explored how the time between two events affects judgments of causal strength between them. In this article, we extend this work in 4 experiments that explore the role of temporal information in causal structure induction with multiple variables. We distinguish two qualitatively different types of information: the order in which events occur, and the temporal intervals between those events. We focus on one-shot learning in Experiment 1. In Experiment 2, we explore how people integrate evidence from multiple observations of the same causal device. Participants' judgments are well predicted by a Bayesian model that rules out causal structures inconsistent with the observed temporal order and favors structures that imply similar intervals between causally connected components. In Experiments 3 and 4, we look more closely at participants' sensitivity to exact event timings. Participants see three events that always occur in the same order, but the pattern of variability and correlation in the event timings is more consistent with either a chain or a fork structure. We show, for the first time, that even when order cues do not differentiate between structures, people can still make accurate causal structure judgments on the basis of interval variability alone.
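As an illustration of how interval information alone can discriminate structures when order does not, the sketch below scores a chain and a fork by how well independent delay distributions on their causal links fit the observed intervals. The Gaussian delay model and maximum-likelihood fit are simplifying assumptions for illustration, not the exact Bayesian model tested in the article.

```python
# Minimal sketch: chain vs. fork from interval variability under a fixed order A, B, C.
import numpy as np
from scipy import stats

def delay_log_likelihood(delays):
    """Gaussian log-likelihood of a set of link delays with ML-fitted mean and sd."""
    mu, sd = delays.mean(), delays.std() + 1e-9
    return stats.norm(mu, sd).logpdf(delays).sum()

def compare_structures(t_a, t_b, t_c):
    """Log-likelihoods of chain (A -> B -> C) and fork (A -> B, A -> C),
    given per-trial event times."""
    ll_chain = delay_log_likelihood(t_b - t_a) + delay_log_likelihood(t_c - t_b)
    ll_fork = delay_log_likelihood(t_b - t_a) + delay_log_likelihood(t_c - t_a)
    return {"chain": ll_chain, "fork": ll_fork}

# Data simulated from a chain: the B-to-C delay is independent of the A-to-B
# delay, so the B-C interval is tight while the A-C interval is more variable.
rng = np.random.default_rng(1)
t_a = np.zeros(50)
t_b = t_a + rng.normal(1.0, 0.1, 50)
t_c = t_b + rng.normal(1.0, 0.1, 50)
print(compare_structures(t_a, t_b, t_c))  # the chain should score higher
```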
Causal queries about singular cases, which inquire whether specific events were causally connected, are prevalent in daily life and important in professional disciplines such as the law, medicine, or engineering. Because causal links cannot be directly observed, singular causation judgments require an assessment of whether a co-occurrence of two events c and e was causal or simply coincidental. How can this decision be made? Building on previous work by Cheng and Novick (2005) and Stephan and Waldmann (2018), we propose a computational model that combines information about the causal strengths of the potential causes with information about their temporal relations to derive answers to singular causation queries. The relative causal strengths of the potential cause factors are relevant because weak causes are more likely than strong causes to fail to generate their effects. But even a strong cause factor is not necessarily causal in a singular case, because it could have been preempted by an alternative cause. Here we show how information about causal strength and about two different temporal parameters, the potential causes' onset times and their causal latencies, can be formalized and integrated into a computational account of singular causation. Four experiments are presented in which we tested the validity of the model. The results showed that people integrate the different types of information as predicted by the new model.
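The sketch below shows one way the two ingredients could be combined, in the spirit of the generalized power PC framework the abstract builds on: the target cause is credited with the effect only if its mechanism fired and was not preempted by the alternative cause, with the preemption probability estimated from onset times and causal latency distributions. The gamma latencies, onset times, and parameter values are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch: strength plus timing for a singular causation query.
import numpy as np

rng = np.random.default_rng(2)

def p_c_caused_e(w_c, w_a, onset_c, onset_a, latency_c, latency_a, n=200_000):
    """Probability that c actually caused e, given that c, a, and e all occurred.
    w_c, w_a: causal strengths; onset_*: onset times; latency_*: callables
    returning n samples of each cause's causal latency."""
    p_e = w_c + w_a - w_c * w_a                 # P(e | c, a), noisy-OR
    arrival_c = onset_c + latency_c(n)
    arrival_a = onset_a + latency_a(n)
    alpha = np.mean(arrival_a < arrival_c)      # chance that a's influence arrives first
    # c caused e iff c's mechanism fired (w_c) and a did not both fire and preempt it.
    return (w_c - alpha * w_c * w_a) / p_e

# Example: equally strong causes, but the alternative starts earlier, so it
# frequently preempts the target cause and the singular judgment drops.
estimate = p_c_caused_e(
    w_c=0.8, w_a=0.8,
    onset_c=2.0, onset_a=0.0,
    latency_c=lambda n: rng.gamma(4.0, 1.0, n),
    latency_a=lambda n: rng.gamma(4.0, 1.0, n),
)
print(round(estimate, 2))
```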