Human perceptual decisions are often described as optimal. Critics of this view have argued that claims of optimality are overly flexible and lack explanatory power. Meanwhile, advocates for optimality have countered that such criticisms single out a few selected papers. To elucidate the issue of optimality in perceptual decision making, we review the extensive literature on suboptimal performance in perceptual tasks. We discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria, inadequate tradeoff between speed and accuracy, inappropriate confidence ratings, misweightings in cue combination, and findings related to various perceptual illusions and biases. In addition, we discuss conceptual shortcomings of a focus on optimality, such as definitional difficulties and the limited value of optimality claims in and of themselves. We therefore advocate that the field drop its emphasis on whether observed behavior is optimal and instead concentrate on building and testing detailed observer models that explain behavior across a wide range of tasks. To facilitate this transition, we compile the proposed hypotheses regarding the origins of suboptimal perceptual decisions reviewed here. We argue that verifying, rejecting, and expanding these explanations for suboptimal behavior, rather than assessing optimality per se, should be among the major goals of the science of perceptual decision making.
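To make the notion of criterion placement concrete, the following is a minimal sketch, assuming standard equal-variance Gaussian signal detection theory (it is not the authors' model). It estimates sensitivity (d') and criterion (c) from hit and false-alarm rates and shows how a criterion displaced from the optimum lowers accuracy in an equal-priors yes/no task.

```python
# Minimal equal-variance SDT sketch (illustrative, not the review's model).
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """d' = z(H) - z(F); c = -(z(H) + z(F)) / 2 under equal-variance SDT."""
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -(zh + zf) / 2

def accuracy(dprime, c):
    """Proportion correct with equal priors; the optimal criterion is c = 0."""
    hit = 1 - norm.cdf(c - dprime / 2)   # signal trials
    cr = norm.cdf(c + dprime / 2)        # noise trials (correct rejections)
    return (hit + cr) / 2

d, c = dprime_and_criterion(hit_rate=0.85, fa_rate=0.30)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
print(f"accuracy at the observed criterion: {accuracy(d, c):.3f}")
print(f"accuracy at the optimal criterion:  {accuracy(d, 0):.3f}")
```

The gap between the last two numbers quantifies the cost of the criterion shift; with equal priors and payoffs, any nonzero c is suboptimal in this sense.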
Understanding how people rate their confidence is critical for characterizing a wide range of perceptual, memory, motor, and cognitive processes. To enable the continued exploration of these processes, we created a large database of confidence studies spanning a broad set of paradigms, participant populations, and fields of study. The data from each study are structured in a common, easy-to-use format.
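As a sketch of what working with such a common format might look like, here is a hypothetical example. The file name and the column names (Subj_idx, Stimulus, Response, Confidence) are assumptions for illustration, not a documented schema.

```python
# Hypothetical example of analyzing one study from a confidence database.
# File name and column names are assumed, not taken from the database docs.
import pandas as pd

df = pd.read_csv("data_ExampleStudy.csv")  # hypothetical file name

# Per-participant accuracy and mean confidence
summary = (
    df.assign(correct=(df["Stimulus"] == df["Response"]).astype(int))
      .groupby("Subj_idx")[["correct", "Confidence"]]
      .mean()
)
print(summary.head())
```

The point of a shared format is precisely that a short script like this can be reused across every study in the database.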
Perceptual decisions are better when they take uncertainty into account. Uncertainty arises not only from the properties of sensory input but also from cognitive sources, such as different levels of attention. However, it is unknown whether humans appropriately adjust for such cognitive sources of uncertainty during perceptual decision-making. Here we show that, in a task in which uncertainty is relevant for performance, human categorization and confidence decisions take into account uncertainty related to attention. We manipulated uncertainty in an orientation categorization task from trial to trial using only an attentional cue. The categorization task was designed to disambiguate decision rules that did or did not depend on attention. Using formal model comparison to evaluate decision behavior, we found that category and confidence decision boundaries shifted as a function of attention in an approximately Bayesian fashion. This means that the observer’s attentional state on each trial contributed probabilistically to the decision computation. This responsiveness of an observer’s decisions to attention-dependent uncertainty should improve perceptual decisions in natural vision, in which attention is unevenly distributed across a scene.
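The following sketch illustrates why a Bayesian decision boundary should depend on attention in a task like this. The generative model is an assumption chosen for illustration (two orientation categories with a shared mean but different variances, plus Gaussian measurement noise sigma_m that varies with attention); it is not the paper's fitted model. Under these assumptions, the boundary where the two category likelihoods are equal moves outward as measurement noise grows.

```python
# Bayesian category boundary that depends on sensory uncertainty (sketch).
# Assumed model: categories C1, C2 have orientation distributions
# N(0, sigma1^2) and N(0, sigma2^2); the measurement x adds noise sigma_m,
# so x | C ~ N(0, sigma_C^2 + sigma_m^2). Parameter values are made up.
import numpy as np

def bayesian_boundary(sigma1, sigma2, sigma_m):
    """|x| at which p(x | C1) = p(x | C2)."""
    a = sigma1**2 + sigma_m**2   # narrow category plus measurement noise
    b = sigma2**2 + sigma_m**2   # wide category plus measurement noise
    return np.sqrt(a * b * np.log(b / a) / (b - a))

sigma1, sigma2 = 3.0, 12.0       # category SDs in degrees (assumed)
for label, sigma_m in [("attended", 4.0), ("unattended", 10.0)]:
    k = bayesian_boundary(sigma1, sigma2, sigma_m)
    print(f"{label}: report narrow category when |x| < {k:.1f} deg")
```

An observer whose boundary widens on unattended trials, as here, is behaving in the approximately Bayesian fashion the abstract describes, whereas a fixed boundary would ignore attention-dependent uncertainty.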
Sensory signals continuously enter the brain, raising the question of how perceptual systems handle this constant flow of input. Attention to an anticipated point in time can prioritize visual information at that time. However, how we voluntarily attend across time when there are successive task-relevant stimuli has barely been investigated. We developed a novel experimental protocol that allowed us to assess, for the first time, both the benefits and costs of voluntary temporal attention when perceiving a short sequence of two or three visual targets with predictable timing. We found that when humans directed attention to a cued point in time, their ability to perceive orientation was better at that time but also worse earlier and later. These perceptual tradeoffs across time are analogous to those found across space for spatial attention. We conclude that voluntary attention is limited, and selective, across time.
The magnocellular (M) and parvocellular (P) subdivisions of the primate lateral geniculate nucleus (LGN) are known to process complementary types of visual stimulus information, but a method for noninvasively defining these subdivisions in humans has proven elusive. As a result, the functional roles of these subdivisions in humans have not been investigated physiologically. To functionally map the M and P subdivisions of human LGN, we used high-resolution fMRI at high field (7T and 3T) together with a combination of spatial, temporal, luminance, and chromatic stimulus manipulations. We found that stimulus factors that differentially drive magnocellular and parvocellular neurons in primate LGN also elicit differential BOLD fMRI responses in human LGN and that these responses exhibit a spatial organization consistent with the known anatomical organization of the M and P subdivisions. In test-retest studies, the relative responses of individual voxels to M-type and P-type stimuli were reliable across scanning sessions on separate days and across sessions at different field strengths. The ability to functionally identify magnocellular and parvocellular regions of human LGN with fMRI opens possibilities for investigating the functions of these subdivisions in human visual perception, in patient populations with suspected abnormalities in one of these subdivisions, and in visual cortical processing streams arising from parallel thalamocortical pathways.
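As a rough illustration of the kind of voxelwise test-retest analysis the abstract describes, here is a hypothetical sketch. The simulated response amplitudes, the preference index (B_M - B_P) / (B_M + B_P), and all parameter values are assumptions for illustration, not the paper's pipeline.

```python
# Hypothetical voxelwise M-vs-P preference index and its test-retest
# reliability across two simulated sessions. All values are made up.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200
true_pref = rng.normal(0, 0.3, n_voxels)   # latent M/P preference per voxel

def session_index(noise_sd=0.1):
    """Preference index (B_M - B_P) / (B_M + B_P) from noisy per-voxel betas."""
    b_m = 1 + true_pref / 2 + rng.normal(0, noise_sd, n_voxels)
    b_p = 1 - true_pref / 2 + rng.normal(0, noise_sd, n_voxels)
    return (b_m - b_p) / (b_m + b_p)

day1, day2 = session_index(), session_index()
r = np.corrcoef(day1, day2)[0, 1]
print(f"test-retest correlation of voxel preference indices: r = {r:.2f}")
```

A high across-session correlation of such an index is what "reliable across scanning sessions" would look like at the voxel level.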