2020
DOI: 10.1007/s00146-020-00997-x

The hard problem of AI rights

Abstract: In the past few years, the subject of AI rights (the thesis that AIs, robots, and other artefacts, hereafter simply 'AIs', ought to be included in the sphere of moral concern) has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress: namely, the lack of a solution to the 'Hard Problem' of consciousness, the problem of explaining why certain brain states give rise to experience. To motiva…

Cited by 30 publications (18 citation statements). References 37 publications.
“…These scholars defend various criteria as crucial for determining whether artificial entities warrant moral consideration. Sentience or consciousness seem to be most frequently invoked (Andreotta, 2020; Bostrom, 2014; Himma, 2003; Johnson & Verdicchio, 2018; Mackenzie, 2014; Mosakas, 2020; Tomasik, 2014; Torrance, 2008; Yampolskiy, 2017), but other proposed criteria include the capacities for interests (Basl, 2014; Neely, 2014), autonomy (Calverley, 2011; Gualeni, 2020), self-control (Wareham, 2013), rationality (Laukyte, 2017), integrity (Gualeni, 2020), dignity (Bess, 2018), moral reasoning (Malle, 2016), and virtue (Gamez et al., 2020).…”
Section: Results (mentioning)
confidence: 99%
“…Attributions of “[m]oral blame positively correlated with agency, whereas moral consideration positively correlated with experience.”
Al-Fedaghi (2007): Al-Fedaghi takes Floridi’s Information Ethics further “by conferring moral value on personal information itself” and “moral consideration to the well-being of any personal information based on the moral concern for the welfare of its proprietor.”
Allen and Widdison (1996): Allen and Widdison consider computer contracts and legal personality from the perspective of protecting the computer’s users, including for convenience reasons. They encourage making some computer-generated agreements enforceable for the sake of “commercial pragmatism.” The legal precedent of personhood for other entities is considered, with autonomy being the relevant criterion; they see legal personality as “legally appropriate” at “a point” in the future.
Anderson (2012): Referring to “a family of theories we will refer to as ‘functional intentionality,’” Anderson argues that a machine “must first be shown to possess a particular moral status before it is a candidate for having genuine intentionality.”
Andreotta (2020): Andreotta argues that consciousness is a more important criterion for grounding “AI rights” than “superintelligence” or empathy. Andreotta argues that “AIs can and should have rights—but only if they have the capacity for consciousness.” The “Hard Problem” of consciousness is seen as a key epistemic problem impeding “the AI rights research program.”
Armstrong et al. (2012): Armstrong, Sandberg, and Bostrom look at an “Oracle AI” approach to solving various AI issues.…”
Section: Appendix (mentioning)
confidence: 99%
“…But with which status, and under what normative conditions? Controversies about the moral and legal status of robots in general, and of humanoid (anthropomorphic) robots in particular, are among the top debates in recent practical philosophy and legal theory (Danaher 2017a; Gunkel 2018; Bryson 2019; Dignum 2019; Basl 2019; Nyholm 2020; Wong and Simon 2020; Andreotta 2020). Quite obviously, the state of the art in robotics and the rapid further development of Artificial Intelligence (AI) raise moral and legal issues that significantly exceed the horizon of classic normative theory building (Behdadi and Munthe 2020).…”
Section: Introduction (mentioning)
confidence: 99%
“…As they stand, however, legal systems by and large do not grant legal protection to artificially intelligent systems. On the one hand, this seems intuitive, given that artificially intelligent systems, even the most state-of-the-art ones, do not seem to be capable of feeling pleasure or pain and thus are not eligible for legal consideration (Nevejans, 2016; Bryson et al., 2017; Chesterman, 2020; Andreotta, 2021; but see Asada, 2019; Shulman and Bostrom, 2021; Galipó et al., 2018). On the other hand, scholars often conclude that artificially intelligent systems with the capacity to feel pleasure and pain will be created, or are at least theoretically possible (Thompson 1965; Aleksander 1996; Blackmore 1999; Buttazzo 2001; Franklin 2003; Harnad 2003; Holland 2007; Chrisley 2008; Seth 2009; Haikonen 2012; Bringsjord et al., 2015; Angel 2019).…”
Section: Introduction (mentioning)
confidence: 99%