2014
DOI: 10.1142/s1793843014400162

Moral Agency, Moral Responsibility, and Artifacts: What Existing Artifacts Fail to Achieve (and Why), and Why They, Nevertheless, Can (and Do!) Make Moral Claims upon Us

Abstract: This paper follows directly from an earlier paper where we discussed the requirements for an artifact to be a moral agent and concluded that the artifactual question is ultimately a red herring. As before, we take moral agency to be that condition in which an agent can appropriately be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context and embodied in a suitable physical for…

Cited by 17 publications (13 citation statements)
References 32 publications
“…In particular, several recent writers have focused their attention on the question whether artificial systems can be held morally responsible for their activity (Coeckelbergh, 2020; Parthemore & Whitby, 2014; Stahl, 2006; Sullins, 2006). Often, when the agent of a morally significant action is an adult human being, we hold them responsible for the action itself and its outcomes.…”
Section: Introduction
confidence: 99%
“…Autonomy is often associated with responsibility, in the AI as well as philosophical literature (e.g., Asaro 2016; Matheson 2012; Parthemore and Whitby 2014). The rise of autonomous machines therefore raises the possibility that they, rather than a human designer, will be responsible for their actions.…”
Section: Responsibility
confidence: 99%
“…The relationship between concepts and consciousness I discuss elsewhere (Parthemore & Whitby, 2013, 2012; Parthemore, 2011a). 8 It should be apparent that, given my approach to concepts, I see theories of concepts and theories of consciousness as closely intertwined; it is difficult to imagine offering a theory of consciousness that does not address the conceptually structured nature of that consciousness: for no one, it seems (excepting perhaps the panpsychists), 9 would deny that conscious thought is so structured.…”
Section: Concepts and Enaction
confidence: 99%
“… 20. At the same time, details of that algorithm, along with its empirical application, lie outside the scope of this paper; instead, see Parthemore and Whitby (2012, 2013) and Parthemore and Morse (2010); Parthemore (2011a). As the references show, the algorithm and its software implementation – in the form of a mind-mapping program – continue to evolve, just as they predict any conceptual framework (and any concept within that framework) should continue to evolve.…”
confidence: 99%