2013
DOI: 10.1145/2536764.2536767

Models of gaze control for manipulation tasks

Abstract: Human studies have shown that gaze shifts are mostly driven by the current task demands. In manipulation tasks, gaze leads action to the next manipulation target. One explanation is that fixations gather information about task relevant properties, where task relevance is signalled by reward. This work presents new computational models of gaze shifting, where the agent imagines ahead in time the informational effects of possible gaze fixations. Building on our previous work, the contributions of this article ar…
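The mechanism the abstract sketches, choosing the next fixation by imagining the informational effect of each candidate, can be illustrated with a minimal greedy lookahead. Everything below is an illustrative assumption rather than the paper's implementation: a discrete belief over which location holds the target, and a binary "target here?" observation that is correct with probability `acc`.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (nats)."""
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_posterior_entropy(belief, fix, acc=0.9):
    """Expected entropy over the target's location after fixating `fix`.

    Observation model (assumed for illustration): fixating a location
    yields a binary report "target here" that is correct with
    probability `acc`.
    """
    expected = 0.0
    for says_here in (True, False):
        # Likelihood of this report given the target is at location j.
        like = [acc if (j == fix) == says_here else 1.0 - acc
                for j in range(len(belief))]
        joint = [b * l for b, l in zip(belief, like)]
        p_obs = sum(joint)                      # marginal of this report
        posterior = [x / p_obs for x in joint]  # Bayesian update
        expected += p_obs * entropy(posterior)
    return expected

def next_fixation(belief, acc=0.9):
    """Greedy one-step lookahead: fixate where the expected remaining
    uncertainty about the target's location is lowest."""
    return min(range(len(belief)),
               key=lambda fix: expected_posterior_entropy(belief, fix, acc))
```

For a belief of `[0.5, 0.3, 0.2]` over three locations, this lookahead selects location 0: under this simple observation model, checking the most probable location reduces expected uncertainty the most.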

Cited by 7 publications (4 citation statements)
References 20 publications
“…The models of Reichle and Laurent (2006) and Lewis et al. (2013) are particularly related to ours in that they optimize policies for rewards that explicitly trade off economy with accuracy (on word identification, Reichle & Laurent, 2006, or lexical decision, Lewis et al., 2013). Beyond language, models of visual behavior based on reinforcement learning have been proposed in other domains (e.g., Acharya, Chen, Myers, Lewis, & Howes, 2017; Butko & Movellan, 2008; Hayhoe & Ballard, 2014; Nuñez-Varela & Wyatt, 2013; Sprague, Ballard, & Robinson, 2007), using both policy-gradient methods like in our model (Butko & Movellan, 2008) and Q-learning algorithms (Acharya et al., 2017; Nuñez-Varela & Wyatt, 2013; Sprague et al., 2007).…”
Section: Relation To Other Models Of Reading
confidence: 99%
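The Q-learning family of gaze models grouped together in this passage can be illustrated with a tiny tabular sketch of a "where to look" choice. The two-location task, reward probabilities, and hyperparameters below are assumptions made for illustration; they are not taken from any of the cited models.

```python
import random

def q_learning_gaze(n_steps=2000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning for a one-state 'where to look' task.

    Two candidate fixation locations; location 1 is task-relevant and
    pays reward 1 with probability 0.8, location 0 with probability 0.2
    (assumed numbers). With a single state the update reduces to
    Q[a] += alpha * (r - Q[a]).
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(n_steps):
        # Epsilon-greedy choice of where to fixate next.
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=q.__getitem__)
        # Task-relevance signalled by reward, as in the cited accounts.
        r = 1.0 if rng.random() < (0.8 if a == 1 else 0.2) else 0.0
        q[a] += alpha * (r - q[a])
    return q
```

After training, the learned values favour the task-relevant location, so the greedy gaze policy fixates where reward (task-relevant information) is obtained.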
“…Computational rational agents have been used to model a number of phenomena in HCI (Payne & Howes, 2013). Applications relevant to this paper include menu interaction (Chen et al., 2015), visual search (Hayhoe & Ballard, 2014; Myers, Lewis, & Howes, 2013; Nunez-Varela & Wyatt, 2013; Tseng & Howes, 2015), and decision-making (Chen, Starke, Baber, & Howes, 2017).…”
Section: Example 2: Computational Rationality
confidence: 99%
“…Computational rational agents have been used to model a number of phenomena in HCI [36]. Applications relevant to this paper include menu interaction [13] and visual search [20, 30, 34, 44].…”
Section: Introduction To Computational Rationality
confidence: 99%