2016
DOI: 10.1007/978-3-319-31413-6_18

Reflections on the Design Challenges Prompted by Affect-Aware Socially Assistive Robots

Cited by 4 publications (5 citation statements)
References 18 publications
“…Apart from discrimination, dehumanization, and deception, which represent phenomena that are potentially relevant for all types of robots involved in HRI, some authors suggest that there are specific ethical issues related to socially assistive robots (SAR), in particular (e.g., [73]). They propose that these issues are unique to SAR due to their more social nature compared with other types of robots.…”
Section: HRI-Specific Ethical Challenges (mentioning; confidence: 99%)
“…SAR are defined as a class of robots between "assistive robotics (robots that provide assistance to a user) and socially interactive robotics (robots that communicate with a user through social and nonphysical interaction)" [74, p. 25]. Wilson et al. [73] suggest the following ethical issues are particularly relevant for social robots: respect for social norms, the robot's ability to make decisions about competing obligations, building and maintaining trust between robot and user, the potential problem of social manipulation and deception by the robot, and the issue of blame and justification, especially if something goes wrong [73]. As building and maintaining trust between robots and users is an important ethical factor in the context of socially assistive robots [73], there are trust-based approaches to ethical social robots.…”
Section: HRI-Specific Ethical Challenges (mentioning; confidence: 99%)
“…To achieve MU, two approaches are necessary: on the one hand, the robots should be able to demonstrate more readable and transparent behavior that would increase interpretability and anticipation by the human users (Wallkötter et al., 2021), and on the other hand, the robots should be equipped with high-level cognitive skills to interpret the human partner's global states (needs, intentions, emotional states) and react to them accordingly. Specifically, the robot's emotional system must be built both in terms of affective elicitation and sensing (Wilson et al., 2016). This can be accomplished by creating robots capable of eliciting emotions through humanized social actions and interpreting human emotions in the same way that a human partner would.…”
Section: Introduction (mentioning; confidence: 99%)
“…Robotic deception has been an important topic in HRI [1]. Some researchers are concerned about the possible harmful impacts of deception in social robots [9][10][11]. One of the concerns is that users might overtrust a robot's capabilities and allow the robot to make unqualified decisions [9].…”
Section: Introduction (mentioning; confidence: 99%)
“…One of the concerns is that users might overtrust a robot's capabilities and allow the robot to make unqualified decisions [9]. Moreover, Wilson et al. are concerned that robot deception may damage human-robot trust and can even lead to manipulation, especially for aging high-risk populations [11].…”
Section: Introduction (mentioning; confidence: 99%)