RO-MAN 2011
DOI: 10.1109/roman.2011.6005228

Recognizing situations that demand trust

Abstract: This article presents an investigation into the theoretical and computational aspects of trust as applied to robots. It begins with an in-depth review of the trust literature in search of a definition for trust suitable for implementation on a robot. Next we apply the definition to our interdependence framework for social action selection and develop an algorithm for determining if an interaction demands trust on the part of the robot. Finally, we apply our algorithm to several canonical social situat…
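
The abstract describes an algorithm, truncated above, for deciding whether an interaction demands trust, built on an outcome matrix representation of interdependence. As a rough illustration only (the paper's actual conditions are cut off here), a minimal Python sketch might test two things: the trustor's outcome depends on the trustee's choice, and some trustee choice leaves the trustor with a loss. The function name, matrix encoding, and risk_threshold parameter are assumptions, not the authors' published formulation.

# Illustrative sketch only; not the published algorithm from this paper.
# An interaction is an outcome matrix mapping
# (trustor_action, trustee_action) -> (trustor_outcome, trustee_outcome).

def situation_demands_trust(matrix, risk_threshold=0.0):
    """Heuristic: trust is demanded if some trustor action leaves the
    trustor's outcome in the trustee's hands, with a possible loss."""
    trustor_actions = {a for a, _ in matrix}
    trustee_actions = {b for _, b in matrix}
    for a in trustor_actions:
        outcomes = [matrix[(a, b)][0] for b in trustee_actions]
        depends_on_trustee = max(outcomes) != min(outcomes)  # interdependence
        trustor_at_risk = min(outcomes) < risk_threshold     # potential loss
        if depends_on_trustee and trustor_at_risk:
            return True
    return False

# Toy example: relying pays off only if the trustee cooperates.
matrix = {
    ("rely", "cooperate"): (5, 3),
    ("rely", "defect"): (-4, 6),
    ("act_alone", "cooperate"): (1, 0),
    ("act_alone", "defect"): (1, 0),
}
print(situation_demands_trust(matrix))  # True: "rely" risks an outcome of -4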

Cited by 18 publications (14 citation statements). References 16 publications.

“…In high-stress environments, human agents may be even less likely to proactively interact with automation, particularly when experiencing physical and cognitive fatigue (Casper & Murphy, 2003). Although trust in automation has been said to be asymmetrical, in that automation does not trust back (Lee & See, 2004), recent work with more autonomous automation (e.g., agents or robots) suggests a more symmetrical view, and social exchange situations may apply to the concerns raised by resilience engineering (Bray, Anumandla, & Thibeault, 2012; Fink & Weyer, 2014; Wagner & Arkin, 2011). In cooperative exchange, people often choose partners depending on the instrumental value of the exchange, even though people’s trust, affective regard, and sense of solidarity with exchange partners are strongly influenced by the symbolic act of reciprocity (Molm, Schaefer, & Collett, 2007).…”
Section: Introduction (mentioning)
confidence: 99%
“…Existing approaches to human-automation cooperation and resilience consider how automation can facilitate collaboration with people in dynamic environments (Allen, Guinn, & Horvitz, 1999; Fong et al, 2005; Wagner & Arkin, 2011; Woods, Tittle, Feil, & Roesler, 2004; Zieba, Polet, Vanderhaegen, & Debernard, 2009). Other studies address the effects of socially sensitive automation, such as developing trust through conversational cues, appearance, and behavior (Cassell & Bickmore, 2000; Desteno et al, 2012; Robinette, Wagner, & Howard, 2013), or engaging in good or poor etiquette (Parasuraman & Miller, 2004; Takayama, Groom, & Nass, 2009).…”
Section: Introduction (mentioning)
confidence: 99%
“…This definition of trust is accepted and used by many studies on trust in HRI. Wagner et al. [61] also provided a comprehensive definition for trust: "a belief, held by the trustor, that the trustee will act in a manner that mitigates the trustor's risk in a situation in which the trustor has put its outcomes at risk". They also provided a model for determining if an interaction demands trust or not.…”
Section: Definition of Trust in HRI (mentioning)
confidence: 99%
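
Read operationally, the definition quoted above has two testable clauses: the trustor has put its outcomes at risk, and the trustee can act in a manner that mitigates that risk. A toy follow-the-guide scenario with invented payoffs (not taken from the paper) makes the two clauses concrete:

# Hypothetical payoffs; the clause labels map to the definition quoted above.
trustor_outcome = {
    ("follow_guide", "guides_correctly"): 10,  # trustee mitigates the risk
    ("follow_guide", "guides_wrongly"): -10,   # risk is realized
}
outcomes_at_risk = min(trustor_outcome.values()) < 0       # clause 1
trustee_can_mitigate = max(trustor_outcome.values()) >= 0  # clause 2
print(outcomes_at_risk and trustee_can_mitigate)  # True: the situation fits
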
“…Our previous research has explored methods that allow a robot to iteratively learn a mental model through successive interaction with its human partner [4]. Our work, as well as the research of others [5,6], has come to the conclusion that this process of creating mental models of humans is critical for behavior prediction [7], for determining if a person or robot is being deceptive [8], and for judging whether or not a person is trustworthy [9].…”
Section: Introduction (mentioning)
confidence: 99%
“…Specifically, the algorithm for cluster-based stereotyping developed here provides the information necessary to create the outcome matrix representation of interaction. Recently we have demonstrated that outcome matrices can be used by a robot or agent to reason about deception [8] and about trust [9]. It was assumed, but not shown, that these representations of interaction could be created from the perceptual information available to a robot.…”
Section: Introduction (mentioning)
confidence: 99%
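
The excerpt's point is that the outcome matrix itself must come from somewhere: cluster-based stereotyping maps perceived partner features to a partner model, from which the matrix is filled in. A loose sketch of that pipeline, with an invented stereotype table and payoffs (the cited cluster-based algorithm is more involved than a lookup), could look like:

# Invented stereotype table and payoffs; illustrates the pipeline, not the
# cluster-based stereotyping algorithm from the citing paper.
STEREOTYPES = {
    # perceived partner features -> estimated probability of cooperation
    ("uniform", "carries_tool"): 0.9,
    ("unknown", "no_tool"): 0.4,
}

def build_outcome_matrix(perceived_features, gain=5, loss=-4):
    """Map perceptual features to a stereotype, then to an outcome matrix."""
    p_cooperate = STEREOTYPES.get(perceived_features, 0.5)  # uninformative prior
    matrix = {
        ("rely", "cooperate"): (gain, 3),
        ("rely", "defect"): (loss, 6),
        ("act_alone", "cooperate"): (1, 0),
        ("act_alone", "defect"): (1, 0),
    }
    return matrix, p_cooperate

matrix, p = build_outcome_matrix(("uniform", "carries_tool"))
expected_rely = p * matrix[("rely", "cooperate")][0] \
              + (1 - p) * matrix[("rely", "defect")][0]
print(round(expected_rely, 2))  # 4.1: for this stereotype, relying looks worthwhile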