2021
DOI: 10.1007/978-3-030-77772-2_7
Whoops! Something Went Wrong: Errors, Trust, and Trust Repair Strategies in Human Agent Teaming

Cited by 6 publications (4 citation statements)
References 47 publications
“…While there is a considerable body of work on the broader area of trust in automation (e.g., alarms, robotics, and unmanned systems), there is considerably less research on trust in AI specifically. Trust has been defined by Lee and See (2004, p. 54) as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability.” Trust is not a binary phenomenon, but rather a spectrum with a considerable gray area between trust and distrust (Roff & Danks, 2018), where trust is calibrated over time based on interactions with the system (Schaefer et al., 2016; Rebensky et al., 2021; Yang, Schemanske, et al., 2021). Trust (or, more specifically, calibrated trust) is important for effective human–AI teaming.…”
Section: Trust in AI
confidence: 99%
“…First, they have conducted research on heads-up vs. heads-down display of UAS information to the operator and the impact this has on operator situation awareness and performance (Rebensky et al., 2021). They have also examined the usability of technologies such as augmented reality glasses during UAS operation in the field (Rebensky et al., 2023). Finally, they have conducted research in the UAS domain related to human agent teams (HAT), such as examining how various levels of automation impact an operator's trust, workload, and performance (Rebensky et al., 2022).…”
Section: Meredith Carroll: Academic Research Supporting UAS Operators
confidence: 99%
“…These findings suggest that trust between teammates (human and machine) is related to team performance. Related works have proposed models and guidelines for implementing trust repair in human-machine teams (de Visser et al., 2020; Rebensky et al., 2021), but call for additional empirical studies to provide validation and better explore the relationship between trust and team effectiveness.…”
Section: Trust in Human–Artificial Intelligence Teaming
confidence: 99%