Proceedings of the 5th International Conference on Human Agent Interaction 2017
DOI: 10.1145/3125739.3125751

Trust Lengthens Decision Time on Unexpected Recommendations in Human-agent Interaction

Cited by 10 publications (4 citation statements). References 14 publications.
“…Severity of the outcome affected compliance with robot requests (Salem et al., 2015). Similar effects were found by Tokushige et al. (2017) as a result of unexpected recommendations.…”
Section: A Unified Information Processing Model For User Centered Fai (supporting)
confidence: 87%
“…All the numbers reported are as of the beginning of January 2021. We found several keywords including automated decision aid (e.g., [91]), AI-based decision support system (e.g., [24]), intelligent assistant (e.g., [2]), intelligent agent (e.g., [214]), classifier (e.g., [237]), etc. Henceforth, all reported percentages are rounded up to the nearest tenth.…”
mentioning
confidence: 99%
“…For example, while trusting belief usually has more emphasis on integrity of the trustee, trusting behavior focuses more on integrity and benevolence of the trustee [18]. There have been early results that suggest a mismatch between trust beliefs and trust behavior [28], which needs further investigation. We mainly focused on lack of accuracy as a cause for trust breakdown and improved accuracy as a form of trust recovery.…”
Section: Hypothesis Results (mentioning)
confidence: 99%
“…If the system is indeed unreliable or inaccurate, the user takes longer to decide whether to follow the system's advice [28]. In robots, Desai et al [6] found that early unreliability had a greater impact on trust formation than unreliability later on.…”
Section: Trust and Accuracy (mentioning)
confidence: 99%