Abstract

A quantum probability model is introduced and used to explain human probability judgment errors, including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector spaces defined by features, and on similarities between vectors, to determine probability judgments. On the other hand, quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of (von Neumann) axioms that relax some of the classic (Kolmogorov) axioms. The quantum model is compared and contrasted with competing explanations for these judgment errors, including the anchoring-and-adjustment model of probability judgment. The quantum model introduces a new fundamental concept to cognition: the compatibility versus incompatibility of questions, and the effect this can have on the sequential order of judgments. We conclude that quantum information-processing principles provide a viable and promising new way to understand human judgment and reasoning.

Over 30 years ago, Kahneman and Tversky (1982) began their influential program of research to discover the heuristics and biases that form the basis of human probability judgments. Since that time, many new and challenging empirical phenomena have been discovered, including conjunction and disjunction fallacies, unpacking effects, and order effects on inference (Gilovich, Griffin, & Kahneman, 2002). Although heuristic concepts (such as representativeness, availability, and anchoring-and-adjustment) initially served as a guide to researchers in this area, there is a growing need to move beyond these intuitions and to develop more coherent, comprehensive, and deductive theoretical explanations (Shah & Oppenheimer, 2008). The purpose of this article is to propose a new way of understanding human probability judgment using quantum probability principles (Gudder, 1988).

At first, it might seem odd to apply quantum theory to human judgments. Before we address this general issue, we point out that we are not claiming the brain to be a quantum computer; rather, we use quantum principles only to derive cognitive models, leaving the neural basis for later research. That is, we use the mathematical principles of quantum probability detached from the physical meaning associated with quantum mechanics. This approach is similar to the application of complexity theory or stochastic processes to domains outside of physics. There are at least five reasons for doing so: (1) judgment is not a simple readout from a pre-existing or recorded state; instead, it is constructed from the question and the cognitive state created by the current context; from this first point it then follows that (2) drawing a conclus...
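To make the projection-based account in the abstract concrete, here is a minimal NumPy sketch (our illustration, not the paper's fitted model) that reproduces a conjunction-fallacy pattern in a two-dimensional vector space. The state vector and the angles of the two question subspaces are invented for illustration.

```python
import numpy as np

def unit(theta):
    """Unit vector at angle theta (radians) in a 2-D real Hilbert space."""
    return np.array([np.cos(theta), np.sin(theta)])

def projector(theta):
    """Rank-1 projector onto the ray at angle theta."""
    u = unit(theta)
    return np.outer(u, u)

# Hypothetical angles (illustrative, not fitted values):
psi = unit(np.radians(0))             # Linda's cognitive state
P_fem = projector(np.radians(25))     # "Linda is a feminist" -- close to psi
P_teller = projector(np.radians(85))  # "Linda is a bank teller" -- far from psi

p_teller = np.linalg.norm(P_teller @ psi) ** 2                     # single event
p_fem_then_teller = np.linalg.norm(P_teller @ (P_fem @ psi)) ** 2  # sequential conjunction

print(f"P(bank teller)           = {p_teller:.3f}")
print(f"P(feminist, then teller) = {p_fem_then_teller:.3f}")
# The sequential "conjunction" exceeds the single-event probability:
# answering the feminist question first moves the state closer to the
# bank-teller subspace, reproducing the conjunction fallacy.
```

Because classical probability requires P(A and B) ≤ P(B) for every pair of events, no assignment of classical events can produce this pattern. In the quantum model it arises whenever the two questions are incompatible (their projectors do not commute), so the conjunction must be evaluated as an ordered sequence of judgments.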
Context effects occur when a choice between two options is altered by adding a third alternative. Three major context effects (similarity, compromise, and attraction) have wide-ranging implications across applied and theoretical domains, and have driven the development of new dynamic models of multiattribute and multialternative choice. We propose the multiattribute linear ballistic accumulator (MLBA), a new dynamic model that provides a quantitative account of all three context effects. Our account applies not only to traditional paradigms involving choices among hedonic stimuli, but also to recent demonstrations of context effects with nonhedonic stimuli. Because of its computational tractability, the MLBA model is more easily applied than previous dynamic models. We show that the model also accounts for a range of other phenomena in multiattribute, multialternative choice, including time pressure effects, and that it makes a new prediction about the relationship between deliberation time and the magnitude of the similarity effect, which we confirm experimentally.
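For readers unfamiliar with this model family, here is a minimal sketch of the linear ballistic accumulator race that forms the MLBA's back end, assuming the drift rates have already been produced by the model's multiattribute front end. The `lba_trial` helper and all parameter values are illustrative, not the published model or its fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def lba_trial(drifts, b=1.0, A=0.5, s=0.25, t0=0.2):
    """One linear ballistic accumulator race.

    drifts : mean drift rate per option (taken as given here; the MLBA
             front end would derive them from pairwise attribute comparisons)
    b      : response threshold
    A      : upper bound of the uniform start-point distribution
    s      : between-trial drift-rate standard deviation
    t0     : non-decision time
    """
    starts = rng.uniform(0, A, size=len(drifts))
    rates = rng.normal(drifts, s)
    rates = np.where(rates > 0, rates, np.nan)  # negative-rate accumulators never finish
    times = (b - starts) / rates                # ballistic: no within-trial noise
    winner = np.nanargmin(times)                # (all-negative races are vanishingly rare here)
    return winner, t0 + np.nanmin(times)

# Hypothetical drift rates for a 3-option choice (target, competitor, decoy):
drifts = np.array([0.9, 0.8, 0.4])
choices = [lba_trial(drifts)[0] for _ in range(10_000)]
print("choice shares:", np.bincount(choices, minlength=3) / len(choices))
```

The computational tractability noted in the abstract comes from this race structure: within a trial each accumulator is deterministic (ballistic), so choice probabilities and response time distributions have closed forms; the simulation above is only for illustration.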
Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables: (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge on the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced-choice task. The contributing teams were blind to the manipulations and had to infer which aspect of behavior was changed using their method of choice. The contributors employed a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across methods, these "modeler's degrees of freedom" did affect inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as those of the more complex methods. We recommend that cognitive models become a standard analysis tool for response time data, and we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and we discuss potential pitfalls in interpreting the output of response time models.
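As a rough illustration of how the four latent variables map onto the parameters of a response time model, here is a minimal Euler-stepped drift-diffusion simulation (one common model in this class). The construct-to-parameter mapping follows standard usage, but the function, parameter names, and values are our illustrative assumptions, not any team's fitted model.

```python
import numpy as np

rng = np.random.default_rng(7)

def ddm_trial(drift=0.25,    # ease of processing
              boundary=1.0,  # response caution (distance between the two bounds)
              bias=0.5,      # response bias: start point as a fraction of boundary
              ndt=0.3,       # non-decision time (encoding + motor), in seconds
              dt=0.001, noise=1.0):
    """Euler simulation of a single drift-diffusion trial.

    Returns (response, rt): response is 1 for the upper bound, 0 for the lower.
    """
    x = bias * boundary  # evidence starts between the bounds at 0 and boundary
    t = 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(x >= boundary), ndt + t

trials = [ddm_trial() for _ in range(2_000)]
p_upper = np.mean([resp for resp, _ in trials])  # with positive drift, upper = "correct"
mean_rt = np.mean([rt for _, rt in trials])
print(f"P(correct) ~ {p_upper:.2f}, mean RT ~ {mean_rt:.2f} s")
```

Model-based inference runs this logic in reverse: the observed accuracy and RT distributions are fit to recover drift, boundary, bias, and non-decision time, which is exactly the step whose validity the blinded collaboration assessed.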
In an anonymous four-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene, & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and the 7.5% in the forced-delay condition who did not adhere to the time constraints); it observed a difference in contributions of −0.37 percentage points, compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including only participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions, compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation.
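The selection-bias interpretation can be illustrated with a toy simulation (all numbers invented): if intuitively cooperative people also tend to decide quickly, and compliance screens out slow deciders under time pressure but fast deciders under forced delay, then a compliant-only analysis shows a difference even when the condition has no causal effect at all.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Each simulated participant has a latent "intuitive cooperativeness" that
# drives BOTH their contribution and their natural decision speed. There is
# no causal effect of condition anywhere in this simulation.
coop = rng.uniform(0, 100, n)               # contribution, % of endowment
speed = coop / 100 + rng.normal(0, 0.5, n)  # higher = faster decider (assumed link)
pressure = rng.random(n) < 0.5              # random assignment to time pressure

# Compliance depends on condition in opposite ways: under time pressure,
# slow deciders miss the deadline; under forced delay, fast deciders
# respond before they are allowed to.
complies = np.where(pressure, speed > 0.3, speed < 0.7)

itt = coop[pressure].mean() - coop[~pressure].mean()
comp = coop[pressure & complies].mean() - coop[~pressure & complies].mean()
print(f"intent-to-treat difference : {itt:+.2f} points")
print(f"compliant-only difference  : {comp:+.2f} points")
```

With these made-up parameters, the intent-to-treat difference hovers near zero while the compliant-only difference is large and positive, qualitatively matching the RRR's pattern of results.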
Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult; as a result, existing models of inference, such as the belief-adjustment model, provide merely ad hoc explanations for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment was developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results of this new experiment suggest that the adding model faces limitations in accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account of order effects than was previously possible.
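The core mechanism can be sketched directly: represent the juror's beliefs as a unit state vector, represent each piece of evidence as a projection onto a subspace, and update by projecting and renormalizing (the Lüders rule). The subspaces and initial state below are invented for illustration (the published model is fit in a higher-dimensional space), but the order effect arises the same way, from non-commuting operators.

```python
import numpy as np

def proj(vectors):
    """Orthogonal projector onto the span of the given vectors."""
    q, _ = np.linalg.qr(np.column_stack(vectors))
    return q @ q.T

def update(psi, P):
    """Lüders rule: project the belief state onto the evidence subspace, renormalize."""
    out = P @ psi
    return out / np.linalg.norm(out)

e = np.eye(3)
psi0 = np.ones(3) / np.sqrt(3)              # initially unbiased juror state
P_guilt = proj([e[0]])                      # "defendant is guilty" ray
P_pros = proj([e[0], e[1]])                 # subspace consistent with prosecution evidence
P_def = proj([e[0] + e[2], e[1] - e[2]])    # subspace consistent with defense evidence

for label, order in [("prosecution then defense", (P_pros, P_def)),
                     ("defense then prosecution", (P_def, P_pros))]:
    psi = psi0
    for P in order:
        psi = update(psi, P)
    print(f"{label}: P(guilty) = {np.linalg.norm(P_guilt @ psi) ** 2:.2f}")
```

Because P_pros and P_def do not commute, the two orders leave the state vector in different places, so the same two pieces of evidence support different final guilt judgments (here 0.50 versus 0.80); a commutative Bayesian update of the same evidence would yield identical posteriors in both orders.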