2020
DOI: 10.1177/0018720820901629

Trusting Autonomous Security Robots: The Role of Reliability and Stated Social Intent

Abstract: Objective This research examined the effects of reliability and stated social intent on trust, trustworthiness, and one’s willingness to endorse use of an autonomous security robot (ASR). Background Human–robot interaction in the domain of security is plausible, yet we know very little about what drives acceptance of ASRs. Past research has relied on static images and game-based simulations to depict robots, rather than on actual humans interacting with actual robots. Method A video depicted an ASR interacting with a h…

Cited by 36 publications (34 citation statements). References 37 publications.
“…Depending on social robots’ human‐like communication behavior (e.g., politeness, benevolence, voice pitch; Lee et al., 2017; Lyons et al., 2020; Zhu & Kaber, 2012) and the message content (i.e., self‐disclosure; Johanson et al., 2019), consumers engage more with the robots and find them less intimidating and more trustworthy (Lyons et al., 2020). If the robot's human‐like behavior evokes perceived intelligence and human‐likeness in consumers, it contributes to building consumer rapport and hospitality experiences (Qiu et al., 2020).…”
Section: Results
confidence: 99%
“…Studies examining trustworthiness in HRI have identified unique relationships for each of these elements. For example, [25] examined the impacts that a robot's reliability and social intent had on ability, benevolence and integrity. This study found reliability to significantly impact ability and integrity but not benevolence, while social intent influenced integrity and benevolence but not ability.…”
Section: A. Trustworthiness and HRI
confidence: 99%
“…A study by Panganiban and colleagues [40] demonstrated that benevolent communications from an AI in the form of an autonomous wingman reduced workload and increased perceptions of teaming. Lyons [41] also found that invoking notions of self-sacrifice versus self-protection in an autonomous security robot was effective in increasing perceptions of benevolence and integrity. In summary, while research is growing in this space and novel methods to convey complex concepts such as benevolence and integrity are being developed and tested, using these constructs within the confines of virtue ethics for AI-based systems remains a challenge, because the context needs to offer an opportunity for benevolence or integrity to manifest and these constructs require deliberate design considerations.…”
Section: Agent Perspective
confidence: 97%