2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8968564

Robots that Take Advantage of Human Trust

Abstract: Humans often assume that robots are rational. We believe robots take optimal actions given their objective; hence, when we are uncertain about what the robot's objective is, we interpret the robot's actions as optimal with respect to our estimate of its objective. This approach makes sense when robots straightforwardly optimize their objective, and it enables humans to learn what the robot is trying to achieve. However, our insight is that, when robots are aware that humans learn by trusting that the robot's actions…
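
The inference model described in the abstract, a human who assumes the robot acts rationally and updates an estimate of its objective from observed actions, is often formalized as Bayesian inverse planning. Below is a minimal sketch under that reading; the discrete objective set, reward values, Boltzmann temperature `beta`, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical example: the human maintains a belief over two candidate
# robot objectives and assumes the robot is (Boltzmann-)rational, so each
# observed action is evidence about which objective the robot optimizes.
objectives = ["deliver", "clean"]
# reward[o, a]: reward of action a under objective o
reward = np.array([[1.0, 0.1, 0.2],   # "deliver" prefers action 0
                   [0.1, 1.0, 0.3]])  # "clean" prefers action 1

def action_likelihood(reward_row, beta=5.0):
    """P(action | objective): higher-reward actions are exponentially
    more likely under the human's rational-robot assumption."""
    logits = beta * reward_row
    p = np.exp(logits - logits.max())   # subtract max for stability
    return p / p.sum()

def update_belief(belief, observed_action):
    """Bayesian update of the human's belief over the robot's objective
    after observing one robot action."""
    likelihood = np.array([action_likelihood(row)[observed_action]
                           for row in reward])
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])       # uniform prior over the objectives
belief = update_belief(belief, 0)   # the robot is seen taking action 0
print(dict(zip(objectives, belief)))  # mass shifts toward "deliver"
```

The paper's insight begins where this model ends: a robot that knows the human updates beliefs this way can choose actions specifically to steer that posterior, which is what makes trust exploitable.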

Cited by 10 publications (2 citation statements)
References 19 publications

“…Alternatively, blind or excessive trust in a system could result in an intelligent machine exploiting a human's trust to achieve an unknown objective [25]. Insufficiently trusting a system, however, could mean ignoring the system completely when it might know pertinent information. The same concepts apply to an autonomous system over- and under-trusting its human team members.…”
Section: Human-Robot Interaction
Confidence: 99%

“…Their human-subject study showed that purely maximizing trust in a human-robot team may not improve team performance. Losey and Sadigh [16] modeled human-robot interaction as a two-player POMDP in which the human does not know the robot's objective. They proposed four ways for the robot to model the human's perception of the robot's objective, and showed that the robot becomes more communicative if it assumes the human trusts it, thereby increasing the human's involvement.…”
Section: Trust-Aware Decision Making in HRI
Confidence: 99%