2016
DOI: 10.1080/13875868.2016.1226838
Gaze behavior during incidental and intentional navigation in an outdoor environment

Abstract: Previous research on landmark selection and route learning derived many of its conclusions from the analysis of memory tasks and verbal route descriptions. We examined the extent to which these findings are reflected in gaze behavior. Wearing a mobile eye tracking device, participants learned the first part of a real-world route incidentally and the second part intentionally. When compared with incidental learning, intentional learning led to a stronger focus on landmarks at structurally salient locations. In …

Cited by 46 publications (38 citation statements) | References 28 publications
“…We measured learning at every level of spatial knowledge (i.e., landmark, route, and survey), in addition to testing the participants' awareness of details in their environment via a non-spatial memory test. We chose tasks that have been shown to reflect landmark knowledge (i.e., a landmark recognition memory task; Wenczel, Hepperle, & von Stülpnagel, 2017), route knowledge (i.e., drawing the guided route on an outline of the building; Labate, Pazzaglia, & Hegarty, 2014), and survey knowledge (i.e., a verbal pointing task, indicating angular direction between landmarks; Rand, Creem-Regehr, & Thompson, 2015). We also chose tasks that simultaneously tapped into multiple levels of spatial knowledge, including filling in a building outline with the name and location of learned landmarks and navigating a novel shortcut between two landmarks (Labate et al, 2014).…”
Section: Current Study
mentioning (confidence: 99%)
“…We also chose tasks that simultaneously tapped into multiple levels of spatial knowledge, including filling in a building outline with the name and location of learned landmarks and navigating a novel shortcut between two landmarks (Labate et al., 2014). The non-spatial memory task was designed to assess participants' ability to maintain awareness of their environment by testing their recognition of incidental landmarks (i.e., van Asselen et al., 2006; Wenczel et al., 2017).…”
Section: Current Study
mentioning (confidence: 99%)
“…Therefore, researchers are now executing a growing number of experiments in real-world environments. Multiple eye tracking experiments exist that analyze pedestrians' [Brügger et al. 2018; Davoudian and Raynham 2012; Fotios et al. 2015a,b; Kiefer et al. 2014, 2012; Wenczel et al. 2017] or cyclists' behavior [Mantuano et al. 2016; Schmidt and von Stülpnagel 2018; Vansteenkiste et al. 2014]. In contrast to existing works, we are presenting a visual analytics-based approach to explore patterns, extract common eye movement strategies, and enable a combined analysis of the multi-modal data.…”
Section: Related Work
mentioning (confidence: 99%)
“…Whereas researchers have analyzed the visual perception of car drivers in many eye tracking experiments [Kapitaniak et al. 2015], there are only a few experiments performed with pedestrians and cyclists. In recent years, researchers have published a growing number of eye tracking experiments analyzing pedestrians and cyclists (e.g., [Davoudian and Raynham 2012; Kiefer et al. 2012; Mantuano et al. 2016; Vansteenkiste et al. 2014; Wenczel et al. 2017]). In this paper, we apply a visual analytics-based method to analyze gaze behavior of pedestrians and cyclists in a real-world eye tracking experiment.…”
Section: Introduction
mentioning (confidence: 99%)
“…Because walking on roads requires considerable visual information and many attention switches, eye-tracking technology is applicable for navigation [37] and road-related [34,38] research. Hepperle and von Stülpnagel [39] compared gaze behaviour during intentional and incidental route learning and retrieval and found that the main difference pertained to the objects that the participants did not view. Liao et al. [40] used eye movement data to infer pedestrians' navigation tasks from five possible tasks and obtained a total classification accuracy of 67%.…”
Section: Introduction
mentioning (confidence: 99%)