2018
DOI: 10.1007/978-3-319-91238-7_39
A Model for Regulating of Ethical Preferences in Machine Ethics

Cited by 1 publication (2 citation statements)
References 17 publications

“…On the other hand, as long as the common interests of the team are met, it will not cause dissatisfaction if some of the individual’s interests cannot be satisfied, meaning that individuals will adjust their moral decisions according to the behavioral decisions that result from the moral types of their teammates ( Baumard et al, 2013 ; Bostyn and Roets, 2017 ); that is, the team has a constraining/limiting effect on the behavior of individuals. Human–computer interaction research generally selects typical moral (utilitarian and deontology) populations for research ( Baniasadi et al, 2018a ; Liu and Liu, 2021 ; Yokoi and Nakayachi, 2021 ). When people think that robots are reliable and trustworthy, human–computer interaction goes more smoothly.…”
Section: Literature Review and Theoretical Hypothesis
confidence: 99%
“…Utilitarian moral individuals are more likely to rely on objects whose behavior is predictable than those of deontological moral types. Besides, studies on human–computer interaction generally select typical moral types (utilitarian and deontology) to conduct research ( Baniasadi et al, 2018b ; Sivill, 2019 ; de Melo et al, 2021 ; Yokoi and Nakayachi, 2021 ; Nijssen et al, 2022 ; Vianello et al, 2022 ). Therefore, our study uses two typical moral types (utilitarian and deontological) to study the impact of the moral types of people and autonomous vehicles on the trust in autonomous vehicles.…”
Section: Literature Review and Theoretical Hypothesis
confidence: 99%