Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and correctly interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts' judgments on the very same data, hinting at a possible inference crisis.
There is great interest in relating individual differences in cognitive processing to the activation of neural systems. The general approach relates measures of task performance, such as reaction times or accuracy, to brain activity in order to identify individual differences in neural processing. One limitation of this approach is that measures like reaction times can be affected by multiple components of processing. For instance, some individuals might achieve higher accuracy in a memory task because they respond more cautiously, not because they have better memory. Computational models of decision making, such as the drift–diffusion model and the linear ballistic accumulator model, offer a potential solution to this problem: they can be fitted to data from individual participants to disentangle the effects of the different processes driving behavior. In this sense, the models can provide cleaner measures of the processes of interest and enhance our understanding of how neural activity varies across individuals or populations. The advantages of this model-based approach to investigating individual differences in neural activity are discussed, along with recent examples of how the method can improve our understanding of the brain–behavior relationship.
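As a concrete illustration of why raw accuracy can conflate response caution with processing quality, here is a minimal simulation sketch of a drift-diffusion-style accumulator. This is not code or parameterization from the work described above; the function, parameter values, and simulation settings are illustrative assumptions. With the drift rate (processing quality) held fixed, raising the decision boundary (caution) alone increases accuracy while slowing responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, boundary, n_trials=2000, dt=0.001, noise=1.0, non_decision=0.3):
    """Simulate a simple symmetric drift-diffusion process.

    Evidence starts at 0 and accumulates with mean rate `drift` plus Gaussian
    noise until it crosses +boundary (correct) or -boundary (error).
    Returns mean accuracy and mean response time across trials.
    """
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)   # add non-decision time (encoding, motor)
        correct.append(x > 0)
    return float(np.mean(correct)), float(np.mean(rts))

# Same drift rate ("memory strength"), different boundary ("caution"):
# the cautious setting is more accurate *and* slower, so accuracy or RT alone
# cannot separate response caution from the quality of the underlying process.
print(simulate_ddm(drift=1.0, boundary=0.8))   # low caution
print(simulate_ddm(drift=1.0, boundary=1.6))   # high caution
```

Fitting such a model to each participant's accuracy and RT distributions, rather than simulating it, is what lets the drift rate and boundary be estimated separately and related to neural measures.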
In three experiments, we sought to understand when and why people use an algorithmic decision aid. Distinct from recent approaches, we explicitly stated the algorithm's accuracy while also providing summary feedback and training that allowed participants to assess their own skill. Our results highlight that such direct performance comparisons between the algorithm and the individual encourage a strategy of selective reliance on the decision aid: individuals ignored the algorithm when the task was easier and relied on it when the task was harder. Our systematic investigation of summary feedback, training experience, and strategy-hint manipulations shows that further opportunities to learn about the algorithm encourage not only increased reliance on the algorithm but also experimentation with and verification of its recommendations. Together, our findings emphasize the decision-maker's capacity to learn about the algorithm, providing insights into how the use of decision aids can be improved.
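The selective-reliance pattern can be summarised as a simple decision rule. The sketch below is a hypothetical illustration only (the function name, accuracy values, and thresholding rule are assumptions, not a description of the experiments): the decision-maker defers to the aid only when its stated accuracy exceeds their own estimated accuracy on the current task.

```python
def final_choice(own_answer, algorithm_answer, own_estimated_accuracy, algorithm_stated_accuracy):
    """Toy selective-reliance rule: defer to the decision aid only when it is
    judged to outperform one's own skill on the task at hand."""
    if algorithm_stated_accuracy > own_estimated_accuracy:
        return algorithm_answer  # harder task: rely on the aid
    return own_answer            # easier task: trust one's own judgment

# Easy task: self-estimate of 90% vs. a 70%-accurate aid -> ignore the algorithm.
print(final_choice("A", "B", own_estimated_accuracy=0.90, algorithm_stated_accuracy=0.70))  # A
# Hard task: self-estimate of 55% vs. the same aid -> rely on the algorithm.
print(final_choice("A", "B", own_estimated_accuracy=0.55, algorithm_stated_accuracy=0.70))  # B
```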
Interruptions are an inevitable, and often negative, part of everyday life that increases both errors and the time needed to complete even menial tasks. However, existing research suggests that being given time to prepare for a pending interruption (a lag time) can mitigate some of the interruption costs. To better understand why interruption lags are effective, we present a series of three experiments in which we develop and test a novel sequential decision-making paradigm, the mazing race. We find that interruption lags were beneficial only when participants had a clear strategy for how to complete the task, allowing them to avoid specific errors. In the final experiment, we attempted to use what we learned about the kinds of errors introduced by interruptions to develop a feedback-based intervention aimed at situations in which interruption lags are not possible. We found that feedback was an effective replacement for an interruption lag only in certain situations. Overall, however, because the usefulness of interruption lags depends on the specific strategy a participant adopts, developing generic interventions to replace interruption lags is likely to be difficult.
Public Significance Statement: We argue that having time to prepare for an impending interruption (i.e., an "interruption lag") can reduce its negative impact, particularly when one has a clear strategy for what needs to be remembered during that time. We also conclude that it is not trivial to replace these interruption lags by supplying information that might have been forgotten during an interruption, because individuals use this time in a variety of ways, remembering whatever information they believe is necessary for their specific way of solving a problem.
Interruptions are an inevitable occurrence in health care. Interruptions in diagnostic decision-making are no exception and can have negative consequences for both the decision-making process and the well-being of the decision-maker. This may result in inaccurate or delayed diagnoses. To date, research specifically on interruptions during diagnostic decision-making has been limited, but strategies to help manage the negative impacts of interruptions need to be developed and implemented. In this perspective, we first present a modified model of interruptions to visualize the interruption process and illustrate where potential interventions can be implemented. We then consider several empirically tested strategies from the fields of health care and cognitive psychology that can lay the groundwork for additional research to mitigate the effects of interruptions during diagnostic decision-making. We highlight strategies to minimize the negative impacts of interruptions, as well as strategies to prevent interruptions altogether. Additionally, we build upon these strategies to propose specific research priorities within the field of diagnostic safety. Identifying effective interventions to help clinicians better manage interruptions has the potential to minimize diagnostic errors and improve patient outcomes.