Pupil diameter and microsaccades are captured by an eye tracker and compared for their suitability as indicators of cognitive load (as induced by task difficulty). Specifically, two metrics are tested in response to task difficulty: (1) the change in pupil diameter with respect to an inter- or intra-trial baseline, and (2) the rate and magnitude of microsaccades. Participants performed easy and difficult mental arithmetic tasks while fixating a central target. Inter-trial change in pupil diameter and microsaccade magnitude appear to adequately discriminate task difficulty, and hence cognitive load, if the implied causality can be assumed. This paper’s contribution corroborates previous work concerning microsaccade magnitude and extends it by directly comparing microsaccade metrics to pupillometric measures. To our knowledge, this is the first study to compare the reliability and sensitivity of task-evoked pupillary and microsaccadic measures of cognitive load.
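For illustration, the two metrics could be computed along the following lines. This is a minimal sketch with our own function and parameter names, not the authors' code; a real analysis would use an established microsaccade detector (e.g., the Engbert and Kliegl velocity-threshold algorithm) rather than the crude fixed threshold shown here.

```python
# Illustrative sketch (not the authors' code): two candidate cognitive-load
# metrics from an eye-tracking trial -- (1) task-evoked pupil diameter change
# relative to a baseline window, and (2) microsaccade rate.
import numpy as np

def pupil_change(trial_pupil_mm, baseline_pupil_mm):
    """Mean change in pupil diameter during the trial relative to a baseline window."""
    return np.mean(trial_pupil_mm) - np.mean(baseline_pupil_mm)

def microsaccade_rate(velocities_deg_s, sample_rate_hz, threshold_deg_s=10.0):
    """Very rough microsaccade rate: rising edges above a fixed velocity
    threshold, per second of recording. A proper detector uses adaptive,
    two-dimensional velocity thresholds (Engbert & Kliegl)."""
    v = np.asarray(velocities_deg_s, dtype=float)
    above = v > threshold_deg_s
    onsets = np.count_nonzero(np.diff(above.astype(int)) == 1)  # candidate onsets
    duration_s = len(v) / sample_rate_hz
    return onsets / duration_s
```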
We demonstrate the use of the ambient/focal coefficient K for studying the dynamics of visual behavior when performing cartographic tasks. Participants viewed a cartographic map and satellite image of Barcelona while performing a number of map-related tasks. Cartographic maps can be viewed as summary representations of reality, while satellite images are typically more veridical, and contain considerably more information. Our analysis of traditional eye movement metrics suggests that the satellite representation facilitates longer fixation durations, requiring greater scrutiny of the map. The cartographic map affords greater peripheral scanning, as evidenced by larger saccade amplitudes. Evaluation of K elucidates task dependence of ambient/focal attention dynamics when working with geographic visualizations: localization progresses from ambient to focal attention; route planning fluctuates in an ambient-focal-ambient pattern characteristic of the three stages of route end point localization, route following, and route confirmation.
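As background, the coefficient K referenced above (introduced by Krejtz and colleagues) is commonly computed as the mean difference between the z-scored duration of each fixation and the z-scored amplitude of the saccade that follows it, with positive values indicating focal and negative values ambient viewing. A minimal sketch with illustrative variable names, not the authors' code:

```python
# Illustrative sketch of the ambient/focal coefficient K: the mean difference
# between each fixation's z-scored duration and the z-scored amplitude of the
# saccade that immediately follows it. K > 0 suggests focal viewing,
# K < 0 suggests ambient scanning.
import numpy as np

def coefficient_k(fixation_durations, next_saccade_amplitudes):
    """fixation_durations[i] is paired with the amplitude of the saccade that
    immediately follows fixation i; standardization is over the whole trial."""
    d = np.asarray(fixation_durations, dtype=float)
    a = np.asarray(next_saccade_amplitudes, dtype=float)
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return float(np.mean(z_d - z_a))
```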
Eye motions constitute an important part of our daily face-to-face interactions. Even subtle details in the eyes’ motions give us clues about a person’s thoughts and emotions. Believable and natural animation of the eyes is therefore crucial when creating appealing virtual characters. In this article, we investigate the perceived naturalness of detailed eye motions, more specifically of jitter in eyeball rotation and pupil diameter, on three virtual characters with differing levels of realism. Participants watched stimuli with six scaling factors from 0 to 1 in increments of 0.2, varying eye rotation and pupil size jitter individually, and had to indicate whether they would like to increase or decrease the level of jitter to make the animation look more natural. Based on participants’ responses, we determine the scaling factors for noise attenuation perceived as most natural for each character when using motion-captured eye motions. We compute the corresponding average jitter amplitudes for the eyeball rotation and pupil size to serve as guidelines for other characters. We find that the amplitudes perceived as most natural depend on the character, with our character with a medium level of realism requiring the largest scaling factors.
Eye movements are an essential part of non‐verbal behavior. Non‐player characters, as they occur in many games, communicate with the player through dialogue and non‐verbal behavior and can have a strong influence on player experience or even on gameplay. In this paper, we evaluate a procedural model designed to synthesize the subtleties of eye motion. More specifically, our model adds microsaccadic jitter and pupil unrest, both modeled by 1/f^α or pink noise, to the saccadic main sequence. In a series of perceptual two‐alternative forced‐choice experiments, we explore the perceived naturalness of different parameters of pink noise by comparing synthesized motions to rendered motion of recorded eye movements at extreme close shot and close shot distances. Our results show that, on average, animations based on a procedural model with pink noise were perceived and evaluated as highly natural, whereas data‐driven motions without any jitter or with unfiltered jitter were consistently selected as the least natural in appearance.
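The 1/f^α (pink-noise) jitter described above can be synthesized in several ways; the following spectral-shaping sketch shows one common approach under assumed names, not the paper's implementation, and the resulting signal would still need to be scaled to the desired eyeball-rotation or pupil-size jitter amplitude.

```python
# Illustrative sketch: spectral synthesis of 1/f^alpha ("pink") noise as a
# jitter signal. alpha = 1 gives pink noise; alpha = 0 gives white noise.
import numpy as np

def one_over_f_noise(n_samples, alpha=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum *= freqs ** (-alpha / 2.0)      # shape power spectrum as 1/f^alpha
    noise = np.fft.irfft(spectrum, n=n_samples)
    return noise / np.std(noise)             # normalize to unit variance
```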