Much is known about the effects of reward and punishment on behavior, yet little research has considered how these incentives influence the information processing dynamics that underlie decision making. We fit the linear ballistic accumulator to data from a perceptual judgment task to examine the impacts of reward- and punishment-based incentives on three distinct components of information processing: the quality of the information processed, the quantity of that information, and the decision threshold. The threat of punishment lowered the average quality and quantity of information processed compared to the prospect of reward or no performance incentive at all. The threat of punishment also induced less cautious decision making by lowering people’s decision thresholds relative to the prospect of reward. These findings suggest that information processing dynamics are not wholly determined by objective properties of the decision environment, but also by the higher order goals of the system.
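To make this decomposition concrete, the sketch below simulates a single two-alternative trial from a linear ballistic accumulator in Python. It is a minimal illustration only: the `lba_trial` helper and all parameter values are assumptions chosen for exposition, not the fitting procedure or parameter estimates reported in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(v, b=1.0, A=0.5, s=0.25, t0=0.2):
    """Simulate one two-choice linear ballistic accumulator trial.

    v  : mean drift rates, one per accumulator; their difference indexes the
         quality of information, their sum the quantity of information
    b  : decision threshold (response caution)
    A  : upper bound of the uniform start-point distribution
    s  : between-trial standard deviation of the drift rates
    t0 : non-decision time (stimulus encoding + motor response)
    """
    drifts = rng.normal(v, s)                    # trial-specific drift rates
    drifts = np.maximum(drifts, 1e-6)            # clamp at zero (a simplification)
    starts = rng.uniform(0.0, A, size=len(v))    # random start points
    times = (b - starts) / drifts                # linear, noise-free rise to threshold
    winner = int(np.argmin(times))               # first accumulator to hit b responds
    return winner, times[winner] + t0            # choice and response time

# An incentive manipulation could, for example, be modelled as a change in
# drift quality (v[0] - v[1]), drift quantity (v[0] + v[1]), or threshold b.
choice, rt = lba_trial(v=np.array([1.0, 0.6]))
print(choice, rt)
```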
According to existing theories of simple decision-making, decisions are initiated by continuously sampling and accumulating perceptual evidence until a threshold value has been reached. Many models, such as the diffusion decision model, assume a noisy accumulation process, described mathematically as a stochastic Wiener process with Gaussian-distributed noise. Recently, an alternative account of decision-making has been proposed in the Lévy Flights (LF) model, in which accumulation noise is characterized by a heavy-tailed power-law distribution controlled by a parameter, α. The LF model produces sudden large "jumps" in evidence accumulation that are not produced by the standard Wiener diffusion model, and some have argued that this feature yields better fits to data. It remains unclear, however, whether jumps in evidence accumulation have any real psychological meaning. Here, we investigate the conjecture of Voss et al. (2019) that jumps might reflect sudden shifts in the source of evidence people rely on to make decisions. We reason that if jumps are psychologically real, we should observe systematic reductions in jumps as people become more practiced with a task (i.e., as people converge on a stable decision strategy with experience). We fitted four versions of the LF model to behavioral data from a study by Evans and Brown (2017), using a five-layer deep inference neural network for parameter estimation. The analysis revealed systematic reductions in jumps as a function of practice, such that the LF model more closely approximated the standard Wiener model over time. This trend could not be attributed to other sources of parameter variability, speaking against the possibility of trade-offs with other model parameters. Our analysis suggests that jumps in the LF model may capture strategy instability exhibited by relatively inexperienced observers early in task performance. We conclude that further investigation of a potential psychological interpretation of jumps in evidence accumulation is warranted.
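To illustrate the role of α, the following Python sketch simulates a single two-boundary Lévy flight accumulator trial using SciPy's alpha-stable distribution. The parameter values are illustrative assumptions; the sketch does not reproduce the fitted models or the neural-network estimator described above. With α = 2 the increments are Gaussian and the process reduces to the standard Wiener diffusion; with α < 2 the heavy tails generate occasional large jumps.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)

def lf_trial(drift=0.5, alpha=1.6, threshold=1.0, dt=0.001, max_t=5.0):
    """Simulate one Lévy flight accumulator trial with two boundaries.

    alpha = 2 gives Gaussian increments (standard Wiener diffusion);
    alpha < 2 gives heavy-tailed increments, i.e. occasional large jumps.
    """
    n = int(max_t / dt)
    # symmetric alpha-stable noise, scaled by dt**(1/alpha) (self-similarity)
    noise = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng) * dt ** (1 / alpha)
    x = np.cumsum(drift * dt + noise)            # accumulated evidence
    crossed = np.nonzero(np.abs(x) >= threshold)[0]
    if crossed.size == 0:
        return None, max_t                       # no boundary reached in time
    i = crossed[0]
    return (1 if x[i] > 0 else 0), (i + 1) * dt  # choice and decision time

print(lf_trial(alpha=2.0))   # smooth, Wiener-like accumulation
print(lf_trial(alpha=1.4))   # jumpy, heavy-tailed accumulation
```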
We examine the extent to which perceptual decision-making processes differ as a function of the time in the academic term at which the participant enrolls in the experiment, and as a function of participant group: undergraduates who complete the experiment for course credit, paid participants who complete the experiment in the lab, or paid participants recruited via Amazon Mechanical Turk who complete the experiment online. In Study 1, we surveyed cognitive psychologists' expectations regarding the quality of data obtained from these different participant groups. We find that cognitive psychologists expect performance and response caution to be lowest among undergraduate participants who enroll at the end of the academic term, and highest among paid in-lab participants. Studies 2 and 3 tested these expectations using two common perceptual decision-making paradigms. Overall, we found little evidence of systematic time-of-term effects among undergraduate participants. The different participant groups responded to standard stimulus quality and speed/accuracy emphasis manipulations in similar ways. The effect of speed/accuracy emphasis on response caution was strongest among participants recruited via Mechanical Turk; this group also showed poorer discrimination performance than the other groups in a motion discrimination task, but not in a brightness discrimination task. We conclude that online crowdsourcing platforms can provide high-quality perceptual decision-making data, and we offer recommendations for maximizing data quality when recruiting through these platforms.