2022
DOI: 10.1007/s12369-022-00871-4
Trust Development in Military and Civilian Human–Agent Teams: The Effect of Social-Cognitive Recovery Strategies

Abstract: Autonomous agents (AA) will increasingly be deployed as teammates instead of tools. In many operational situations, flawless performance from AA cannot be guaranteed. This may lead to a breach in the human’s trust, which can compromise collaboration. This highlights the importance of thinking about how to deal with error and trust violations when designing AA. The aim of this study was to explore the influence of uncertainty communication and apology on the development of trust in a Human–Agent Team (HAT) when…

Cited by 8 publications (8 citation statements). References 70 publications.
“…Ahead of the development of any structured model of trust as a social construct for HRI, a range of studies have nonetheless demonstrated user trust can be influenced through robots' simulated social or affective interactions. Examples include using rhetorical persuasion [14], using gesture and expression [15], taking blame [16] or offering apologies for errors [18], [19], and making promises to change behaviours [20]. Emerging social models, drawing from such examples, argue that trust as a social construct, as seen in human-human interaction [1], [3], has relevance in HRI [18], [21].…”
Section: A. Social Robotics (mentioning; confidence: 99%)
“…In the long run, AI will enable machines to match or outperform critical aspects of human intelligence, allowing them to adapt to unexpected and changing situations regardless of human intervention. In many operational scenarios, such as the military [15], autonomous intelligent agents and humans are used concurrently to achieve common goals, implying that they are expected to make decisions together. Coordination between agents and humans is found to be more important than providing independent functions in operations [15].…”
Section: B. Humanised Intelligence (mentioning; confidence: 99%)
“…In many operational scenarios, such as the military [15], autonomous intelligent agents and humans are used concurrently to achieve common goals, implying that they are expected to make decisions together. Coordination between agents and humans is found to be more important than providing independent functions in operations [15]. Agents work in uncertain areas when information is lacking, and errors undermine human trust.…”
Section: B. Humanised Intelligence (mentioning; confidence: 99%)
“…The two buildings are designed to be similar, but include different details. While searching the buildings, participants are accompanied by a drone that tells them whether it detects threats in the environment by advising them to either move carefully or to proceed normally via automated audio messages (see [5]). Level of trust in the drone is repeatedly measured using a virtual slider in the VR environment [4], [6].…”
Section: VR Environment (mentioning; confidence: 99%)