1999
DOI: 10.1068/p2935

The Roles of Vision and Eye Movements in the Control of Activities of Daily Living

Abstract: The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame…


Cited by 925 publications (888 citation statements)
References 26 publications
“…Here again all 1058 trials without missing data are included. When performing everyday tasks our eyes are usually directed at the object or objects that are relevant for what we are doing at that moment (Johansson et al. 2001; Land et al. 1999; Land and Hayhoe 2001; Triesch et al. 2003), or toward positions at which critical information is expected to become available (e.g., information about how a ball bounces; Land and Furneaux 1997; Land and McLeod 2000). We could therefore tentatively conclude from Fig.…”
Section: Eye Movements (mentioning)
confidence: 99%
“…In spite of the predominance of brief durations of fixations in prehension movements, it has been shown that they do support movement control. Several studies have shown that visual information necessary for movement control can be computed within a single fixation (Ballard et al. 1995; Land et al. 1999). This indicates quite efficient visual processing of some easy-to-compute visual features required for online arm movement control.…”
Section: Fixation Durations at the Obstacle (mentioning)
confidence: 99%
“…For the remaining 91.7% of the task, gaze and arm movements are synchronously driven to the same goal (to the obstacle during the first segment of movement, and toward the target after the obstacle is passed). Land et al. (1999) observed in their tea-making experiment that the gaze and arm movements are highly coupled during execution of each subtask, but when it comes to a transition toward a new target, the gaze switches approximately 0.5 s before the movement of the arm to the previous object is completed. Johansson et al. (2001) found that the difference between the gaze exit times and arm exit times was quite tight when executing sequential tasks, but the gaze starts moving toward the new target slightly before the hand does (∼100–200 ms), as well.…”
Section: Gaze and Arm Exit Times from the Obstacle (mentioning)
confidence: 99%
“…During relatively slow sequences [e.g. making tea; see Land and Hayhoe (2001) for review], gaze typically lands on the target on average 560 ms prior to any hand movement, thus providing sufficient time to process and combine retinal and extra-retinal information for movement planning (Land et al. 1999). However, in more temporally demanding sequence tasks such as typing a text message on a mobile phone or picking up and moving objects on a fast-moving production line, the visuo-motor system would be challenged by feedback delays (up to 165 ms for visual processing; e.g.…”
Section: Introduction (mentioning)
confidence: 99%