2019
DOI: 10.1016/j.socec.2019.02.010
Sharing responsibility with a machine

Cited by 29 publications (22 citation statements). References 86 publications.
“…On the other hand, in a modified dictator game, no significant differences in taking responsibility were detected between human-human and human-computer teams, though participants in human-computer teams did behave slightly more selfishly (79). Note that by refraining from shifting responsibility towards computers, these participants forewent an apparently effective way to avoid responsibility: as an earlier study showed, observers of accidents tend to attribute less responsibility to a company if technology was involved in the accident (80).…”
Section: The Collaboration of Humans and Automated Agents in Teams
confidence: 96%
“…Considering the Uncanny Valley [23] and the balance between social cues and competence, researchers examined the minimal capabilities an agent must actually have: understanding its teammates and reacting appropriately and at adequate length. Ultimately, the outcome depends on the contributions of each member, including the agent [30][31][32][33].…”
Section: Related Work
confidence: 99%
“…Other researchers have analyzed how ethical aspects are defined differently for human-only, AI-only, and combined decision making, and found that moral fault was always attributed to humans (Shank et al 2019). Kirchkamp and Strobel (2019) found that feelings of guilt likewise do not change, while responsibility in human-AI teams is perceived as higher than in human-only teams and selfish behavior decreases. According to their findings, no higher form of moral responsibility has so far been attributed to machines.…”
Section: Ethical Perspectives on Using AI in Strategic Organizational
confidence: 99%