Human operators supervising multiple uninhabited air and ground vehicles (UAVs and UGVs) under high task load must be supported appropriately in context by automation. Two experiments examined the efficacy of such adaptive automation in a simulated high workload reconnaissance mission involving four subtasks: (a) UAV target identification; (b) UGV route planning; (c) communications, with embedded verbal situation awareness probes; and (d) change detection. The results of the first "baseline" experiment established the sensitivity of a change detection procedure to transient and nontransient events in a complex, multi-window, dynamic display. Experiment 1 also set appropriate levels of low and high task load for use in Experiment 2, in which three automation conditions were compared: manual; static automation, in which an automated target recognition (ATR) system was provided for the UAV task; and adaptive automation, in which individual operator change detection performance was assessed in real time and used to invoke the ATR if and only if change detection accuracy was below a threshold. Change detection accuracy and situation awareness were higher and workload was lower for both automation conditions compared to manual performance. In addition, these beneficial effects on change detection and workload were significantly greater for adaptive compared to static automation. The results point to the efficacy of adaptive automation for supporting the human operator tasked with supervision of multiple uninhabited vehicles under high workload conditions.
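The adaptive logic described above, in which the ATR aid is invoked if and only if the operator's real-time change detection accuracy falls below a threshold, can be sketched minimally. The function name, rolling-window inputs, and threshold value below are illustrative assumptions for clarity; the abstract does not specify the study's actual criterion or implementation.

```python
def should_invoke_atr(recent_hits: int, recent_probes: int,
                      threshold: float = 0.5) -> bool:
    """Decide whether to hand the UAV target identification task to the ATR.

    Returns True (invoke ATR) if and only if the operator's rolling
    change detection accuracy falls below `threshold`; otherwise the
    task stays under manual control. The 0.5 default and the
    hit/probe inputs are hypothetical placeholders.
    """
    if recent_probes == 0:
        return False  # no change-detection data yet: remain manual
    accuracy = recent_hits / recent_probes
    return accuracy < threshold
```

Under this rule the aid engages only for operators who are currently struggling, which is what distinguishes the adaptive condition from static automation (where the ATR is always on).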
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches to studying and understanding it. This comprehensive narrative review addresses the known methods that have been used to capture TiA. We examined measures deployed in existing empirical work, categorized them into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers: a list of available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations for improving the current state of TiA measurement.
Research suggests that humans and autonomous agents can be more effective when working together as a combined unit than as individual entities. However, most research has focused on the design characteristics of autonomous agents while ignoring the importance of social interactions and team dynamics. Two experiments examined how the perception of teamwork in human–human and human–autonomous agent pairings, and the application of team building interventions, could enhance teamwork outcomes. Participants collaborated with either a human or an autonomous agent. The first experiment revealed that manipulating team structure, by framing one's human or autonomous partner as a teammate rather than a tool, can improve affective and behavioral outcomes but does not benefit performance. In the second experiment, participants completed goal setting and role clarification (team building) with their teammate prior to task performance. Team building interventions led to significant improvements in all teamwork outcomes, including performance. Across both studies, participants communicated substantially more with human partners than with autonomous partners. Taken together, these findings suggest that social interaction between humans and autonomous teammates should be an important design consideration and that particular attention should be given to team building interventions to improve affect, behavior, and performance.
Three experiments examined the vigilance performance of participants watching videos depicting intentional actions of an individual's hand reaching for and grasping an object (transporting or using either a gun or a hairdryer) in order to detect infrequent threat-related actions. Participants indicated detection of target actions either by responding manually or by withholding a response. They also rated their subjective mental workload before and after each vigilance task. Irrespective of response mode, the detection rate of intentional threats declined over time on task and subjective workload increased, but only under visually degraded viewing conditions. This vigilance decrement was attenuated by temporal cues that were 75% valid in predicting a subsequent target action and was eliminated with 100% valid cues. The findings indicate that the detection of biological motion targets, and of threat-related intentional actions in particular, although not attention sensitive under normal viewing conditions, is subject to vigilance decrement under degraded viewing conditions. The results are compatible with the view that the decrement in detecting threat-related intentional actions reflects an increasing failure of attention allocation processes over time.