2012
DOI: 10.1167/12.13.19

The role of uncertainty and reward on eye movements in a virtual driving task

Abstract: Eye movements during natural tasks are well coordinated with ongoing task demands and many variables could influence gaze strategies. Sprague and Ballard (2003) proposed a gaze-scheduling model that uses a utility-weighted uncertainty metric to prioritize fixations on task-relevant objects and predicted that human gaze should be influenced by both reward structure and task-relevant uncertainties. To test this conjecture, we tracked the eye movements of participants in a simulated driving task where uncertain…

Cited by 71 publications (68 citation statements)
References 75 publications
“…Therefore, we cannot eliminate secondary learning and task difficulty as factors influencing primary strategy recruitment. Indeed, uncertainty has been shown to modify the use of performance strategies (Derusso et al., 2010; Sullivan et al., 2012). However, it appears that uncertainty, learning phase, and/or task difficulty on the secondary choice are not principal factors driving the effects on primary strategy observed here.…”
Section: Discussion
confidence: 99%
“…This reward uncertainty protocol was developed by Sprague et al [22] to simulate behavior in a walking environment and was shown to be superior to the common round-robin protocol used in robotics. Evidence that gaze allocation in a dynamic, noisy environment is in fact controlled by reduction of visual uncertainty weighted by subjective reward value was obtained by Sullivan et al [36]. This study tracked eye movements of participants in a simulated driving task where uncertainty and implicit reward (via task priority) were varied.…”
Section: Expected Reward As a Module's Fixation Protocol
confidence: 97%
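The reward-weighted uncertainty protocol summarized in this excerpt can be sketched in code: each task module's state uncertainty grows while it is not fixated, fixating a module reduces its uncertainty, and gaze is allocated to whichever module has the largest reward-weighted uncertainty. This is a minimal illustrative sketch, not the published implementation; the module names, the linear variance-growth model, and all parameter values are assumptions.

```python
class TaskModule:
    """One perceptual subtask (e.g. lane-keeping) with a subjective reward
    weight and a state-variance estimate that grows while unfixated."""

    def __init__(self, name, reward, noise_growth, obs_noise):
        self.name = name
        self.reward = reward              # subjective task value (assumed)
        self.sigma2 = 1.0                 # current state-variance estimate
        self.noise_growth = noise_growth  # variance added per unfixated step
        self.obs_noise = obs_noise        # measurement noise when fixated

    def expected_cost(self):
        # Expected loss from acting on an uncertain estimate, taken here
        # as proportional to reward-weighted variance.
        return self.reward * self.sigma2

    def step(self, fixated):
        if fixated:
            # Kalman-style variance reduction from one noisy observation.
            self.sigma2 = 1.0 / (1.0 / self.sigma2 + 1.0 / self.obs_noise)
        self.sigma2 += self.noise_growth  # uncertainty always drifts upward


def schedule_gaze(modules, steps):
    """Greedy scheduler: at each step, fixate the module whose
    reward-weighted uncertainty is largest."""
    trace = []
    for _ in range(steps):
        target = max(modules, key=lambda m: m.expected_cost())
        for m in modules:
            m.step(fixated=(m is target))
        trace.append(target.name)
    return trace
```

Run with two modules of unequal reward and the higher-reward module attracts the majority of fixations, which is the qualitative prediction the cited study tested.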
“…The data from Sullivan et al [36] were then modeled by Johnson et al [37] using an adaptation of the reward uncertainty protocol to estimate the best reward and noise values for each task directly from the gaze data. Because optimal behavior for driving consisted of staying at fixed setpoints in distance and speed, they were able to use the simplification of a servo controller for choosing the action, an approximation that works well in this instance.…”
Section: Expected Reward As a Module's Fixation Protocol
confidence: 99%
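The servo-controller simplification mentioned in this excerpt amounts to nudging each controlled variable (speed or following distance) toward its setpoint in proportion to the current error. A minimal proportional-servo sketch, with illustrative gain and setpoint values that are assumptions rather than figures from the paper:

```python
def servo_step(state, setpoint, gain=0.5):
    """One proportional-servo update: move the controlled variable
    toward its setpoint by a fixed fraction of the current error."""
    error = setpoint - state
    return state + gain * error


def run_servo(state, setpoint, steps, gain=0.5):
    """Iterate the servo update, returning the trajectory of states."""
    trace = [state]
    for _ in range(steps):
        state = servo_step(state, setpoint, gain)
        trace.append(state)
    return trace
```

With a gain below 1, the error shrinks geometrically each step, so the state converges to the setpoint; this is why a servo is an adequate action-selection rule when optimal behavior is simply holding fixed setpoints.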
“…Uncertainty, event expectancies, internal task state estimates (Johnson et al., 2014; Wickens et al., 2001; Senders et al., 1967), and saliency, as well as the expected effort and value (reward) of gathering visual information from a particular source (Sullivan et al., 2012; Wickens et al., 2001), can certainly play a role in driver multitasking behavior and may provide alternative avenues for, or improvements to, the current model for explaining the empirical findings. The current model is based on the theory of threaded cognition in multitasking by Salvucci and Taatgen (2008) and does not require modeling of uncertainty or internal task state estimates (for now at least) or an explicit central executive process in multitasking situations; instead, a straightforward "threading" mechanism suffices to interleave the resource processing between tasks.…”
Section: Limitations and Further Research
confidence: 99%