2017
DOI: 10.1192/apt.bp.115.015321
Moral responsibility in psychopathy: A clinicophilosophical case discussion

Abstract: This article examines the concept of moral responsibility in psychopathy. In doing so it shows how philosophical ideas can be used to help approach a complex issue in psychiatry. Building on a fictitious case, we explore two arguments: the exempting view, which proposes that psychopaths lack any ability to function as moral agents; and the mitigating view, which concedes that there are impairments in moral understanding in psychopathy, but takes these to be insufficient to be completely exempting, inste…

Cited by 2 publications (2 citation statements)
References 11 publications
“…This is also referred to as the 'intelligence explosion' argument. 28 As a general note, we should stress that the justification or refutation of these or other fears is not part of our claim in any way, nor is the claim that these fears or their causes can or cannot be mitigated when needed. The sole purpose of outlining these fears is to acknowledge and detail the reasons some or most people may have for fearing advanced artificial agents, as a way of explaining a possible reluctance of human society to grant them full moral agency.…”
Section: What Is Required Of Artificial Agents To Become Moral Agents?
confidence: 91%
“…26 See also ]. 27 See [35: §4]. 28 For the intelligence explosion argument, see its original presentation in Good [36], and interesting discussions in [37, 38] or on MIRI's website: https://intelligence.org/ie-faq/, Accessed 26-Oct-2022.…”
Section: What Is Required Of Artificial Agents To Become Moral Agents?
confidence: 99%