2016
DOI: 10.1177/1541931213601031
Application of a System-Wide Trust Strategy when Supervising Multiple Autonomous Agents

Abstract: When interacting with complex systems, the manner in which an operator trusts automation influences system performance. Recent studies have demonstrated that people tend to apply trust broadly rather than exhibiting specific trust in each component of the system in a calibrated manner (e.g., Keller & Rice, 2010). While this System-Wide Trust effect has been established for basic situations such as judging gauges, it has not been studied in realistic settings such as collaboration with autonomous agents in a…

Cited by 33 publications (24 citation statements)
References 18 publications
“…At last, consistent with findings from previous studies, our results showed that as the automated threat detector became more reliable, participants' trust in and dependence on the threat detector increased, and their dual-task performance improved (Neyedli et al., 2011; Walliser et al., 2016; Wang et al., 2009). Conclusion: Although disclosing likelihood information has been proposed as a design solution to promote proper trust and dependence, and to enhance human-automation team performance, prior studies showed mixed results (Bagheri & Jamieson, 2004; Dzindolet et al., 2002; Fletcher et al., 2017; Walliser et al., 2016; Wang et al., 2009). The goal of this study was to experimentally examine the effects of presenting different types of likelihood information.…”
Section: Performance (supporting, confidence: 91%)
“…Yet, results from these studies seem to be inconsistent. Some studies revealed that the likelihood information significantly helped human operators calibrate their trust, adjust their reliance and compliance behaviors, and enhance human-automation team performance (McGuirl & Sarter, 2006; Walliser, de Visser, & Shaw, 2016; Wang, Jamieson, & Hollands, 2009). Other studies, however, reported that human operators did not trust or depend on automated decision aids appropriately even when the likelihood information was disclosed (Bagheri & Jamieson, 2004; Fletcher, Bartlett, Cockshell, & McCarley, 2017).…”
Section: Introduction (mentioning, confidence: 99%)
“…Furthermore, adding humans to a decision aid may help with acceptance of automated advice when operators have a tendency to reject the recommendations of an accurate automated aid (Cummings, Clare, & Hart, 2010; Dzindolet et al., 2001). Finally, mixing sources of advice within a human-machine team could further help in diminishing system-wide trust, an effect where operators generalize the incorrect advice of one aid to other identical aids (Keller & Rice, 2010; Walliser, de Visser, & Shaw, 2016). Taken together, these suggestions could help improve human-machine team performance by using group composition as a deliberate design variable to steer group consensus in directions that benefit group decision making.…”
Section: Discussion (mentioning, confidence: 99%)
“…The first part of the discussion will focus on the strategies operators employ when interacting with multiple systems. Recent studies have demonstrated that people tend to apply trust broadly rather than exhibiting specific trust in each component of a system in a calibrated manner (Keller & Rice, 2010; Walliser et al., 2016). This can be problematic because a single unreliable agent or component of a system can create a pull-down effect, lowering operator trust in other functionally similar agents or components (Keller & Rice, 2010).…”
Section: Trust Calibration and Trust Repair, Tyler Shaw (mentioning, confidence: 99%)
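The pull-down effect described above can be made concrete with a minimal toy simulation. This sketch is my own construction, not a model from the paper or the citing studies: the update rule, learning rate, and agent outcomes are all illustrative assumptions. It contrasts calibrated, component-specific trust with a system-wide strategy in which one shared trust value is updated by every agent's outcome, so the errors of a single unreliable agent drag down trust in reliable, functionally similar agents too.

```python
# Toy illustration (assumed model, not from the paper): component-specific
# vs. system-wide trust updating with one unreliable agent among three.

RATE = 0.2  # assumed learning rate: how strongly each outcome shifts trust

def step(trust, correct, rate=RATE):
    """Move a trust value toward 1.0 after a correct outcome, 0.0 after an error."""
    target = 1.0 if correct else 0.0
    return trust + rate * (target - trust)

# Agents 0 and 1 perform correctly on every trial; agent 2 errs on every trial.
outcomes = [True, True, False]

# Component-specific (calibrated) trust: each agent's trust tracks only
# that agent's own record.
specific = [0.8, 0.8, 0.8]
for _ in range(10):
    specific = [step(t, c) for t, c in zip(specific, outcomes)]

# System-wide trust: a single shared value updated by every agent's outcome,
# so agent 2's errors lower trust applied to the reliable agents as well.
shared = 0.8
for _ in range(10):
    for correct in outcomes:
        shared = step(shared, correct)

print(specific)  # reliable agents end near 1.0, the unreliable agent near 0.0
print(shared)    # one intermediate value, pulled down by the unreliable agent
```

Under component-specific updating the reliable agents keep high trust while the unreliable agent's trust collapses; under the system-wide strategy all three agents receive the same intermediate trust value, mirroring the lowered trust in functionally similar agents that Keller and Rice (2010) describe.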