Human–agent teaming (HAT) is becoming more commonplace across industry, military, and consumer settings. Agents are becoming more advanced, more integrated, and more responsible for tasks previously assigned to humans. In addition, the nature of human–agent teaming is evolving from a dyadic, one-to-one pairing to a one-to-many arrangement, in which a human works with numerous agents to accomplish a task. As agent capabilities become more advanced and humanlike, the best method for humans and agents to coordinate effectively remains unknown. Therefore, current research must shift its focus from how many agents a human can manage to how agents and humans can work together effectively. Levels of autonomy (LOAs), or varying levels of responsibility given to agents, implemented specifically in the decision-making process, could potentially address some of the issues related to workload, stress, performance, and trust. This study explored the effects of different LOAs on human–machine team coordination, performance, trust, and decision making, alongside assessments of operator workload and stress, in a simulated multi-unmanned aerial vehicle (UAV) intelligence, surveillance, and reconnaissance (ISR) task. The results of the study can be used to identify human factors roadblocks to effective HAT and to provide guidance for future HAT designs. Additionally, the unique impacts of LOA and autonomous decision making by agents on trust are explored.
This study investigated how and why trust changes over time across differing levels of autonomy (LOAs) in an intelligence, surveillance, and reconnaissance (ISR) task involving four unmanned aerial vehicles (UAVs). Four LOAs, ranging from low to high (manual, advice, consent, veto), were implemented in the study. The analysis examined whether trust increased, decreased, or stayed the same during transitions between two different LOAs. Through a thematic analysis, themes influencing participants' trust in the agents were identified. Current findings suggest that trust increases most when a participant moves between moderate LOAs, specifically from the condition in which the agent's role is to provide advice to the condition in which the agent makes decisions on its own with the consent of the human. Trust also increased when the human moved from the highest LOA (in which the human merely vetoes agent decisions) to the manual condition. The most prevalent themes influencing trust included perceived agent errors and false alarms.