Eye blinking is one of the most frequent human actions. The control of blinking is thought to reflect complex interactions between maintaining clear and healthy vision and influences tied to central dopaminergic functions, including cognitive states, psychological factors, and medical conditions. The most immediate consequence of blinking is a temporary loss of vision. Minimizing this loss of information is a prominent explanation for changes in blink rates and temporarily suppressed blinks, but quantifying this loss is difficult, as environmental regularities are usually complex and unknown. Here we used a controlled detection experiment with parametrically generated event statistics to investigate human blinking control. Subjects were able to learn environmental regularities and adapted their blinking behavior strategically to better detect future events. Crucially, our design enabled us to develop a computational model that quantifies the consequences of blinking in terms of task performance. The model formalizes ideas from active perception by describing blinking in terms of optimal control, trading off intrinsic costs for blink suppression against task-related costs for missing an event under perceptual uncertainty. Remarkably, this model not only is sufficient to reproduce key characteristics of the observed blinking behavior, such as blink suppression and blink compensation, but also predicts, without further assumptions, the well-known and diverse distributions of time intervals between blinks, for which an explanation has long been elusive.
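The trade-off described in this abstract can be illustrated with a toy simulation. The sketch below is a minimal illustration under assumed parameters (blink duration, trial length, event timing), not the paper's fitted model: an observer who times blinks away from the likely event time misses fewer events than one who blinks at random moments.

```python
import numpy as np

# Toy illustration of the blink-timing trade-off: a blink causes a brief
# blackout, so placing it at a moment when an event is unlikely reduces
# the task-related cost of missing events. All parameter values below
# are illustrative assumptions.

rng = np.random.default_rng(0)
BLINK_DUR = 0.2        # seconds of visual loss per blink (assumed)
TRIAL_LEN = 4.0        # seconds per trial (assumed)
MU, SIGMA = 2.0, 0.3   # events cluster around 2 s into the trial (assumed)
N_TRIALS = 20_000

events = rng.normal(MU, SIGMA, N_TRIALS)

def miss_rate(blink_onsets):
    """Fraction of events falling inside the blink's blackout window."""
    missed = (events >= blink_onsets) & (events < blink_onsets + BLINK_DUR)
    return missed.mean()

# Uninformed observer: blinks at a random moment each trial.
random_blinks = rng.uniform(0.0, TRIAL_LEN - BLINK_DUR, N_TRIALS)
# Strategic observer: blinks early, suppressing blinks around the likely
# event time (blink suppression) and getting the blink "out of the way".
strategic_blinks = np.full(N_TRIALS, 0.5)

print(miss_rate(random_blinks), miss_rate(strategic_blinks))
```

The strategic observer pays the fixed blackout cost at a moment of low event probability, so nearly no events are lost, while the random blinker misses a constant fraction proportional to the blackout duration.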
The capability of directing gaze to relevant parts of the environment is crucial for our survival. Computational models have proposed quantitative accounts of human gaze selection in a range of visual search tasks. Initially, models suggested that gaze is directed to the locations in a visual scene at which some criterion, such as the probability of target location, the reduction of uncertainty, or the maximization of reward, appears to be maximal. But subsequent studies established that, in some tasks, humans instead direct their gaze to locations such that, after the single next look, the criterion is expected to become maximal. However, in tasks going beyond a single action, the entire action sequence may determine future rewards, thereby necessitating planning beyond a single next gaze shift. While previous empirical studies have suggested that human gaze sequences are planned, quantitative evidence for whether the human visual system is capable of finding optimal eye movement sequences according to probabilistic planning has been missing. Here we employ a series of computational models to investigate whether humans are capable of looking ahead more than the next single eye movement. We found clear evidence that subjects' behavior was better explained by the model of a planning observer than by a myopic, greedy observer, which selects only a single saccade at a time. In particular, the location of our subjects' first fixation differed depending on the stimulus and the time available for the search, which was well predicted quantitatively by a probabilistic planning model. Overall, our results are the first evidence that the human visual system's gaze selection agrees with optimal planning under uncertainty.
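The distinction between a myopic (greedy) observer and a planning observer can be made concrete in a toy search task. The setup below is purely illustrative, not the paper's experimental design: a target hides in one of seven cells, a fixation reveals the fixated cell and its immediate neighbors, and the observer has two fixations before reporting a location. The greedy observer picks each fixation to maximize the prior mass revealed right now; the planner evaluates whole two-fixation sequences.

```python
import numpy as np
from itertools import combinations

# Illustrative prior over 7 target locations (assumed, sums to 1);
# the center cell is slightly more likely.
PRIOR = np.array([0.15, 0.10, 0.15, 0.20, 0.15, 0.10, 0.15])
N = len(PRIOR)

def window(i):
    """Cells revealed by fixating cell i (fovea plus near periphery)."""
    return {j for j in (i - 1, i, i + 1) if 0 <= j < N}

def mass(cells):
    return sum(PRIOR[j] for j in cells)

def accuracy(fixations):
    """Probability of reporting the target after the fixation sequence."""
    covered = set().union(*(window(i) for i in fixations))
    uncovered = set(range(N)) - covered
    # Seen targets are reported correctly; otherwise the observer
    # guesses the most probable uncovered cell.
    return mass(covered) + (max(PRIOR[j] for j in uncovered) if uncovered else 0.0)

# Myopic observer: each fixation greedily maximizes newly revealed mass.
f1 = max(range(N), key=lambda i: mass(window(i)))
f2 = max(range(N), key=lambda i: mass(window(i) - window(f1)))
greedy_acc = accuracy([f1, f2])

# Planning observer: evaluates every two-fixation sequence up front.
plan_acc = max(accuracy(pair) for pair in combinations(range(N), 2))

print(greedy_acc, plan_acc)
```

The greedy observer's first look lands on the locally richest window, which forces its second look into a poor position; the planner coordinates both fixations so that, together with elimination, every cell is resolved.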
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is well characterized, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown, despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in athletes suggest that the timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events, humans adopt strategies that can be understood through a computational model that includes uncertainties in perception and action, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate against the behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model, the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.

eye movements | computational modeling | decision making | learning | visual attention

We live in a dynamic and ever-changing environment. To avoid missing crucial events, we need to constantly use new sensory information to monitor our surroundings.
However, the fraction of the visual environment that can be perceived at a given moment is limited by the placement of the eyes and the arrangement of the receptor cells within the eyes (1). Thus, continuously monitoring environmental locations, even when we know which regions in space contain relevant information, is infeasible. Instead, we actively explore by targeting the visual apparatus toward regions of interest using proper movements of the eyes, head, and body (2-5). This constitutes a fundamental computational problem requiring humans to decide sequentially when to look where. Solving this problem arguably has been crucial to our survival, from the early human hunters pursuing a herd of prey and avoiding predators to the modern human navigating a crowded sidewalk and crossing a busy road.

So far, the emphasis of most studies on eye movements has been on spatial gaze selection. Its exquisite adaptability can be appreciated by considering the different factors influencing gaze, including low-level image features (6), scene gist (7) and scene semantics (8), task constraints (9), and extrinsic rewards (10); al...
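The learning account in the abstract above combines Bayesian updating with the scalar law of timing, i.e., estimation noise that grows in proportion to the timed interval. A rough sketch of such a learner follows; the Weber fraction, the hidden interval, and the discretization grid are all illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Sketch of an optimal Bayesian learner of an event interval under
# scalar timing noise (Weber's law): a timed interval t is perceived as
# t * (1 + eps) with eps ~ N(0, WEBER**2), so noise scales with t.
# All parameter values are illustrative assumptions.

rng = np.random.default_rng(1)
WEBER = 0.15            # assumed Weber fraction of timing noise
TRUE_INTERVAL = 1.2     # hidden interval between events, in seconds

# Discretized posterior over candidate intervals, starting from a
# flat prior.
grid = np.linspace(0.2, 3.0, 281)
posterior = np.ones_like(grid) / len(grid)

for _ in range(50):
    # One noisy perceived interval per trial.
    obs = TRUE_INTERVAL * (1 + WEBER * rng.standard_normal())
    # Likelihood under each candidate interval; the sd grows linearly
    # with the candidate (the scalar property).
    sd = WEBER * grid
    like = np.exp(-0.5 * ((obs - grid) / sd) ** 2) / sd
    posterior *= like
    posterior /= posterior.sum()

estimate = grid[np.argmax(posterior)]
print(estimate)
```

Because the posterior is renormalized after every trial, the update is numerically stable, and the maximum a posteriori estimate converges to the hidden interval at a rate limited by the scalar noise.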