An algorithm was developed to characterize, compare, and analyze eye movement sequences that occur during visual tracking of multiple moving targets. When individuals perform a task requiring interrogating multiple moving targets, complex and long eye movement sequences occur, making sequence comparisons difficult in whole and in part. The developed algorithm characterizes a sequence by hierarchically clustering the targets that an individual interrogated through an unordered transition matrix created from the frequencies of eye fixation transitions among the targets. Then, the resulting sets of clustered targets, which we define as multilevel visual groupings (VGs), can be compared to analyze performance. The algorithm was applied to an aircraft conflict detection task. Eye movement data were collected from 25 expert air traffic controllers and 40 novices. The task was to detect air traffic conflicts in easy, moderate, and hard difficulty scenarios on a simulated radar display. Experts' and novices' multilevel (level one composed of pairs, and level two composed of three or four targets) VGs were aggregated and visualized. Chi-square tests confirmed that there were significant differences for the easy (level one: p < 0.001; level two: p = 0.004), moderate (level two: p = 0.047), and hard (level two: p < 0.001) difficulty scenarios. The algorithm supported identifying different eye movement characteristics between experts and novices. Experts' scans had multilevel VGs around the conflict pairs, whereas novices' scans included different aircraft. The results show promise for using this compact representation of eye movements for performance analysis.
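The core idea above — building an unordered (direction-insensitive) transition matrix from fixation transitions and hierarchically clustering targets into visual groupings — can be sketched as follows. This is a minimal illustration under assumed details (the fixation sequence, the frequency-to-distance conversion, and the use of average-linkage clustering are all illustrative choices, not the paper's exact method):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented example: a fixation sequence over 5 targets, labeled 0..4.
fixations = [0, 1, 0, 1, 2, 3, 2, 3, 4, 2, 3, 0, 1]

n = 5
counts = np.zeros((n, n))
for a, b in zip(fixations, fixations[1:]):
    if a != b:  # count only transitions between distinct targets
        counts[a, b] += 1

# Unordered transition matrix: the direction of a transition is ignored.
sym = counts + counts.T

# Assumed conversion: frequent transitions -> small pairwise distance.
dist = 1.0 / (1.0 + sym)
np.fill_diagonal(dist, 0.0)

# Condensed upper-triangle distance vector for scipy's agglomerative clustering.
condensed = dist[np.triu_indices(n, k=1)]
Z = linkage(condensed, method="average")

# Cut the dendrogram into two clusters, yielding coarse target groupings.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

With this toy sequence, targets 0 and 1 (which alternate frequently) fall into one grouping, and targets 2, 3, and 4 into another; cutting the dendrogram at different heights would yield the multilevel groupings described above.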
Incorporating experts' scanpaths into novices' active learning processes shows promise for enhancing training effectiveness and reducing training time.
Physiological indicators, including eye tracking measures, may provide insight into human decision making and cognition in many domains, including weather forecasting. Situation awareness (SA), a critical component of forecast decision making, is commonly conceptualized as the degree to which information is perceived, understood, and projected into a future context. Drawing upon recent applications of eye tracking in the study of forecaster decision making, we investigate the relationship among eye movement measures, automation, and SA assessed through a freeze probe assessment method. In addition, we explore the relationship between the use of an automated forecasting decision aid and information-seeking behavior. In this study, a sample of professional weather forecasters completed a series of tasks, informed by a set of forecasting decision aids, and with variable access to an experimental automated tool, while an eye tracking system captured data related to eye movements and information usage. At the end of each forecasting task, participants responded to a set of questions related to the environmental situation in the framework of a survey-based assessment technique in order to assess their level of situation awareness. Regression analysis revealed a moderate relationship between the SA measure and eye tracking metrics, supporting the hypothesis that eye tracking may have utility in assessing SA. The results support the use of eye tracking in the assessment of specific and measurable attributes of the decision-making process in weather forecasting. The findings are discussed in light of potential benefits that eye tracking could bring to human performance assessment as well as decision-making research in the forecasting domain.
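The regression step described above can be illustrated in a minimal form. The data below are synthetic, and the predictor names (fixation count, mean dwell time) are assumptions for illustration, not the study's actual measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Synthetic eye-tracking predictors and a synthetic SA score.
fixation_count = rng.normal(120, 20, n)
mean_dwell_ms = rng.normal(300, 50, n)
sa_score = 0.02 * fixation_count - 0.01 * mean_dwell_ms + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), fixation_count, mean_dwell_ms])
beta, *_ = np.linalg.lstsq(X, sa_score, rcond=None)

# Coefficient of determination as a strength-of-relationship summary.
pred = X @ beta
ss_res = np.sum((sa_score - pred) ** 2)
ss_tot = np.sum((sa_score - sa_score.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

In practice, a statistics package reporting p-values and confidence intervals for the coefficients (e.g., statsmodels) would be preferred over raw least squares; this sketch only shows the shape of the analysis.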
When aircraft are not aligned into orderly streams, air traffic controllers (ATCs) will likely need to develop visual scanning strategies to enhance their conflict detection performance given their limited perceptual and cognitive resources. In this work, visual scanning, aircraft selection, and aircraft comparison are investigated. Twenty-five active professional ATCs detected conflicts in a simulated en-route environment. After the trials, the ATCs documented their visual search and conflict detection strategies. Analysis of the written information shows that the visual scanning methods can be classified into six categories (circular, linear, augmented, regional, density-based, and proximity-based). The aircraft selection methods fall into three categories (select aircraft that are at the same altitude; at the same altitude and converging; or at the same altitude and in close proximity). The aircraft comparison methods fall into five categories (attend to altitude changes; speed, or speed differences; speed and angle/bearing; overtake; and projection). The proposed integrated process incorporates these categorizations by accommodating the visual scanning strategies into the overall process.
Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and the increasing complexity of scanpaths (ordered sequences of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately translate the eye tracking data into the simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step toward automating the characterization and classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths to linguistic inputs to reduce human judgment bias during inter-rater agreement. The developed concepts and procedures were applied to investigate the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and the number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.
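One elementary example of the kind of scanpath filtering described above is collapsing consecutive refixations on the same aircraft, so that a sequence like "A A B B A" reduces to "A B A". The paper's actual filtering intensities are richer than this; the sketch below only illustrates the general idea of reducing a scanpath to a simpler form:

```python
from itertools import groupby

def collapse_repeats(scanpath):
    """Drop immediate refixations, keeping one fixation per run of repeats."""
    return [target for target, _ in groupby(scanpath)]

print(collapse_repeats(["A", "A", "B", "B", "A", "C", "C", "C"]))
# → ['A', 'B', 'A', 'C']
```

Stronger filtering levels might additionally merge short fixations into neighboring ones or keep only transitions that recur above a frequency threshold.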