2020
DOI: 10.1109/thms.2019.2947592
Individual Differences in Trust in Autonomous Robots: Implications for Transparency

Cited by 59 publications (46 citation statements)
References 52 publications

“…Generally, explanations contribute to transparency, which is defined as the provision of information to help the human understand various aspects of agent functioning [27]. A recent study suggests that transparency should be compatible with the user's mental model of the system in order to support accurate trust calibration [31]. A mental model is an internal representation in the mind of one actor about the characteristics of another actor [59].…”
Section: Discussion

“…Different forms of transparency might be needed depending on whether the human's representation of the system concerns an advanced tool or a teammate. Accordingly, personalized feedback that highlights either the machine's data-analytic capabilities (advanced tool) or its humanlike social functioning (teammate) provides a strategy for trust management [31]. In that sense, an explanation is far more complex than an expression of regret, as there is a wider range of possible underlying messages of the explanation and the way they are articulated.…”
Section: Discussion

“…The variable perceived comprehensibility is a novel trust antecedent that has become increasingly important since humans interact with intelligent and autonomous systems (Matthews et al. 2020), as the efforts of governments and companies toward explainable AI underline (Cutler, Pribić, and Humphrey 2019; Fjeld and Nagy 2020). The effect on trust is not as strong as that of perceived ability: when perceived comprehensibility increases by one standard deviation, trust rises by 0.174 standard deviations, assuming all other antecedents stay constant.…”
Section: Discussion

“…Similar to the trust model for interaction with an intelligent, autonomous robot of Matthews et al. (2020), we assumed that not all of our six antecedents directly influence trust, but that some are mediated through other antecedents. Thus, we employed the (probably) most generic expression of a mediation model (Baron and Kenny 1986), the SOR model of Woodworth (1926), to structure our antecedents.…”
Section: Hypotheses Development

“…Therefore, the presentation of robots to users (i.e., perceived design features) can also have an impact on whether the public trusts the system or not [37, 38]. The next section discusses trusting intentions towards robots.…”
Section: Intelligent Agent Definition