Making a decision is invariably accompanied by a sense of confidence in that decision. The exact level of confidence varies widely, even for tasks that do not differ in objective difficulty. Such expressions of under- and overconfidence are of vital importance, as they are related to fundamental life outcomes. Yet, a clear account of the computational mechanisms underlying under- and overconfidence is currently missing. In the current work, we propose that prior beliefs in the ability to perform a task can explain why confidence can differ dramatically despite similar task performance. In two experiments, we provide evidence for this hypothesis by showing that manipulating prior beliefs about task performance in an induction phase causally influences reported levels of confidence in a testing phase, while leaving actual performance unaffected. This holds whether prior beliefs are shaped via manipulated feedback or via the difficulty of the task during the induction phase. These results are accounted for within an accumulation-to-bound model by an additional parameter controlling prior beliefs about task performance. Our results provide a fundamental mechanistic insight into the computations underlying over- and underconfidence.
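One way the proposed mechanism could work is sketched below: the choice depends only on the trial's evidence, so manipulating the prior cannot change accuracy, while the reported confidence combines the evidence-based probability of being correct with a prior belief about one's own ability. The function names and the specific Bayesian form are illustrative assumptions, not the authors' fitted accumulation-to-bound model.

```python
import math

def trial(evidence, prior_correct=0.5):
    """Toy sketch: choice depends only on the sign of the evidence
    (so manipulating the prior leaves accuracy untouched), while
    confidence combines the evidence-based probability correct with
    a prior belief about one's own ability via Bayes' rule."""
    choice = 1 if evidence > 0 else -1
    # Logistic stand-in for P(correct | evidence); an assumption here.
    p_evidence = 1.0 / (1.0 + math.exp(-abs(evidence)))
    # Bayesian combination with the prior belief in being correct.
    num = prior_correct * p_evidence
    confidence = num / (num + (1 - prior_correct) * (1 - p_evidence))
    return choice, confidence

# Identical evidence, different induced prior beliefs:
_, low  = trial(1.0, prior_correct=0.3)   # induced underconfidence
_, mid  = trial(1.0, prior_correct=0.5)   # neutral prior
_, high = trial(1.0, prior_correct=0.7)   # induced overconfidence
```

Under this sketch, the induction-phase manipulation corresponds to shifting `prior_correct`, which moves reported confidence up or down while the choice itself is unchanged.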
Making a decision and reporting your confidence in the accuracy of that decision are thought to reflect a similar mechanism: the accumulation of evidence. Previous research has shown that choices and reaction times are well accounted for by a computational model assuming noisy accumulation of evidence until crossing a decision boundary (e.g., the drift diffusion model). Decision confidence can be derived from the amount of evidence following post-decision evidence accumulation. Currently, the stopping rule for post-decision evidence accumulation is underspecified. Inspired by recent neurophysiological evidence, we introduce additional confidence boundaries that determine the termination of post-decision evidence accumulation. If this conjecture is correct, it implies that confidence judgments should be subject to the same strategic considerations as the choice itself, i.e. a tradeoff between speed and accuracy. To test this prediction, we instructed participants to make fast or accurate decisions, and to give fast or carefully considered confidence judgments. Results show that our evidence accumulation model with additional confidence boundaries successfully captured the speed-accuracy tradeoffs seen in both decisions and confidence judgments. Most importantly, instructing participants to make fast versus accurate decisions influenced the decision boundaries, whereas instructing participants to make fast versus careful confidence judgments influenced the confidence boundaries. Our data show that the stopping rule for confidence judgments can be well understood within the context of evidence accumulation models, and that the computation of decision confidence is under strategic control.
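The two-stage mechanism can be sketched as a random-walk simulation with one boundary pair terminating the decision and a second, post-decision boundary terminating the confidence judgment. All parameter names and values below are illustrative assumptions, not the fitted model from the study.

```python
import random

def simulate_trial(drift=0.5, noise=1.0, dt=0.001,
                   decision_bound=1.0, confidence_bound=0.5):
    """Minimal sketch of a drift diffusion model with a separate
    confidence boundary for post-decision evidence accumulation."""
    x, t = 0.0, 0.0
    # Phase 1: noisy accumulation until a decision boundary is crossed.
    while abs(x) < decision_bound:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    choice = 1 if x > 0 else -1
    decision_time = t
    # Phase 2: accumulation continues until the evidence has moved a
    # further `confidence_bound` away from (or back toward) the
    # boundary that was crossed.
    start = x
    while abs(x - start) < confidence_bound:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    # Confidence reflects the signed post-decision evidence for the choice.
    confidence = (x - start) * choice
    return choice, decision_time, confidence, t

random.seed(0)
choice, dt_, conf, total_t = simulate_trial()
```

In this sketch, the speed-accuracy instruction for decisions maps onto `decision_bound`, while the fast-versus-careful confidence instruction maps onto `confidence_bound`: raising it lets post-decision accumulation run longer before the confidence report is committed.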
According to the dual mechanisms of control (DMC) framework, reactive and proactive control are involved in adjusting behaviors that are maladapted to the environment. However, both contextual and inter-individual factors increase the weight of one control mechanism over the other by influencing their cognitive costs. According to one of the DMC postulates, limited reactive control capacities should be counterbalanced by greater proactive control to ensure control efficiency. Moreover, as the flexible weighting between reactive and proactive control is key to adaptive behaviors, we expected maladaptive behaviors, such as risk-taking, to be characterized by the absence of such a counterbalance. However, to our knowledge, no studies have yet investigated this postulate. In the current study, we analyzed the performance of 176 participants on two reaction time tasks (Simon and Stop Signal tasks) and a risk-taking assessment (the Balloon Analogue Risk Task, BART). Post-error slowing in the Simon task was used to reflect individuals' spontaneous tendency to proactively adjust behaviors after an error, the Stop Signal Reaction Time was used to assess reactive inhibition capacities, and the duration of the button press in the BART was used as an index of risk-taking propensity. Results showed that poorer reactive inhibition capacities predicted greater proactive adjustments after an error. Furthermore, the higher the risk-taking propensity, the less reactive inhibition capacities predicted proactive behavioral adjustments. These results suggest that higher risk-taking is associated with a weaker weighting of proactive control in response to limited reactive inhibition capacities. These findings highlight the importance of considering the imbalanced weighting of reactive and proactive control in the analysis of risk-taking and, in a broader sense, maladaptive behaviors.
Human decision making is accompanied by a sense of confidence. According to Bayesian decision theory, confidence reflects the learned probability of making a correct response, given the available data (e.g., accumulated stimulus evidence and response time). Although optimal, independently learning these probabilities for all possible combinations of data is computationally intractable. Here, we describe a novel model of confidence that implements a low-dimensional approximation of this optimal yet intractable solution. With a small number of free parameters, this model allows efficient estimation of confidence while at the same time accounting for idiosyncrasies, different kinds of biases, and deviations from the optimal probability correct. Our model dissociates confidence biases resulting from individuals' estimates of the reliability of evidence (captured by parameter α) from confidence biases resulting from general, stimulus-independent under- and overconfidence (captured by parameter β). We provide empirical evidence that this model accurately fits both choice data (accuracy, response time) and trial-by-trial confidence ratings simultaneously. Finally, we test and empirically validate two novel predictions of the model, namely that (1) changes in confidence can be independent of performance and (2) selectively manipulating each parameter of our model leads to distinct patterns of confidence judgments. As the first tractable and flexible account of the computation of confidence, our model provides concrete tools to construct computationally more plausible models, and offers a clear framework to interpret and further resolve different forms of confidence biases.
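The dissociation between the two bias parameters can be illustrated with a toy confidence readout in which α scales the assumed reliability of the evidence and β adds a stimulus-independent shift. The logistic form below is a simplifying assumption for illustration, not the model's actual low-dimensional approximation.

```python
import math

def confidence(evidence, rt, alpha=1.0, beta=0.0):
    """Toy probability-correct readout with two bias parameters.
    alpha scales the assumed reliability of the evidence (alpha > 1:
    evidence is treated as more reliable than it is, yielding more
    extreme confidence); beta shifts confidence up or down regardless
    of the stimulus (beta > 0: general overconfidence)."""
    # Logistic stand-in for P(correct | evidence, rt); evidence is
    # discounted by elapsed time so late responses count for less.
    p = 1.0 / (1.0 + math.exp(-alpha * evidence / math.sqrt(rt)))
    return min(1.0, max(0.0, p + beta))

# Same objective evidence, different subjective confidence:
baseline = confidence(1.0, 1.0)              # unbiased observer
reliant  = confidence(1.0, 1.0, alpha=2.0)   # overweights evidence reliability
overconf = confidence(1.0, 1.0, beta=0.15)   # global overconfidence
```

Because the choice itself can be made without consulting this readout, either parameter can shift reported confidence while accuracy stays fixed, matching the model's first prediction.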