Research suggests that humans and autonomous agents can be more effective when working together as a combined unit rather than as individual entities. However, most research has focused on autonomous agent design characteristics while ignoring the importance of social interactions and team dynamics. Two experiments examined whether perceiving teamwork in human–human and human–autonomous agent pairings, and applying team building interventions, could enhance teamwork outcomes. Participants collaborated with either a human or an autonomous agent. The first experiment revealed that manipulating team structure so that a human or autonomous partner is regarded as a teammate rather than a tool can improve affective and behavioral outcomes, but does not benefit performance. In the second experiment, participants completed goal setting and role clarification (team building) with their teammate prior to task performance. Team building interventions led to significant improvements in all teamwork outcomes, including performance. Across both studies, participants communicated substantially more with human partners than with autonomous partners. Taken together, these findings suggest that social interactions between humans and autonomous teammates should be an important design consideration and that particular attention should be given to team building interventions to improve affect, behavior, and performance.
When interacting with complex systems, the manner in which an operator trusts automation influences system performance. Recent studies have demonstrated that people tend to apply trust broadly rather than calibrating their trust to each component of the system (e.g., Keller & Rice, 2010). While this System-Wide Trust effect has been established for basic situations such as judging gauges, it has not been studied in realistic settings such as collaboration with autonomous agents in a multi-agent system. This study used a multiple-UAV control simulation to explore how people apply trust to multiple autonomous agents in a supervisory control setting. Participants interacted with four UAVs that used automated target recognition (ATR) systems to identify targets as enemy or friendly. When one of the autonomous agents was inaccurate and performance information was provided, participants (1) were less accurate, (2) were more likely to verify the ATR's determinations, (3) spent more time verifying images, and (4) rated the other systems as less trustworthy even though those systems were 100% accurate. These findings support previous work demonstrating the prevalence of system-wide trust and expand the conditions under which system-wide trust strategies are applied. This work suggests that multi-agent systems should provide carefully designed cues and training to mitigate the system-wide trust effect.
Among groups of humans, the team structure has been argued to be the most effective way for people to organize to accomplish work. Research suggests that humans and autonomous agents can be more effective when working together. However, the drive toward capable autonomous teammates has focused on design characteristics while ignoring the importance of social interactions between teammates. In the present study we created team structure through task interdependence and observed teamwork outcomes in the form of affect, behavior, and performance. A team structure resulted in improved affect and performance outcomes relative to a non-team structure; however, team structure did not elicit significant behavioral differences. Human partners received higher affect ratings and elicited significantly more communication from the participant than autonomous partners did. These findings suggest that social interactions between humans and autonomous teammates should be an important design consideration. While the current data are promising, team structure alone may not be sufficient to ensure effective teams, so further research should explore the utility of team development interventions between humans and autonomous agents.
Operators generalize their trust across all of the autonomous agents they are working with, a phenomenon referred to as System-Wide Trust (SWT). As a result, the failure of one aid can reduce trust in, and therefore prompt disuse of, all other competent aids within the system. This study explored two possible SWT mitigation strategies: competence transparency and differing appearance of the aids. Previous research has shown that transparency and feedback affect trust calibration in systems (Walliser et al., 2016), yet our appearance manipulation is relatively novel, using gestalt principles of grouping to explore whether heterogeneous aids are more easily differentiated than homogeneous aids. Participants supervised four UAVs that identified targets as enemies or friendlies. Only one of these UAVs was inaccurate (70% recommendation accuracy), which caused an SWT decrement for all UAVs. We expected that the heterogeneous UAVs with competence transparency would suffer the smallest SWT effect, yet the results revealed no difference between conditions. These findings suggest that the System-Wide Trust effect is too strong to be affected by our manipulations and that further research on mitigation strategies is required.