2019
DOI: 10.1142/11404

Consciousness and Robot Sentience

Cited by 14 publications (10 citation statements)
References 0 publications
“…Scholars often conclude that artificial entities with the capacity for positive and negative experiences (i.e. sentience) will be created, or are at least theoretically possible (see, for example, Thompson, 1965; Aleksander, 1996; Buttazzo, 2001; Blackmore, 1999; Franklin, 2003; Harnad, 2003; Holland, 2007; Chrisley, 2008; Seth, 2009; Haikonen, 2012; Bringsjord et al., 2015; Reese, 2018; Anthis and Paez, 2021; Angel, 2019). Surveys of cognitive scientists (Francken et al., 2021) and artificial intelligence researchers (McDermott, 2007) suggest that many are open to this possibility.…”
Section: Introduction (mentioning)
confidence: 99%
“…Scholars often conclude that artificial entities with the capacity for positive and negative experiences (i.e. sentience) will be created, or are at least theoretically possible (see, for example, Thompson 1965; Aleksander 1996; Buttazzo 2001; Blackmore 1999; Franklin 2003; Harnad 2003; Holland 2007; Chrisley 2008; Seth 2009; Haikonen 2012; Bringsjord et al. 2015; Angel 2019), and an informal survey of Fellows of the American Association for Artificial Intelligence suggested that many were open to this possibility (McDermott 2007). Tomasik (2011), Bostrom (2014), Gloor (2016a), and Sotala and Gloor (2017) argue that the insufficient moral consideration of sentient artificial entities, such as the subroutines or simulations run by a future superintelligent AI, could lead to astronomical amounts of suffering.…”
Section: Introduction (mentioning)
confidence: 99%
“…A model of mind and consciousness [12]. The design of a conscious machine faces formidable scientific and engineering obstacles, and so one must begin with small steps. Architectures that copy models of brain function have been investigated [2], [13], [14], [15]. These architectures include distributive agents and the global workspace theory (GWT) [16], [17].…”
Section: One World or Many? (mentioning)
confidence: 99%
“…On the one hand, this seems intuitive, given that artificially intelligent systems, even the most state-of-the-art ones, do not seem to be capable of feeling pleasure or pain and thus are not eligible for legal consideration (Nevejans, 2016; Bryson et al., 2017; Chesterman, 2020; Andreotta, 2021; but see Asada, 2019; Shulman and Bostrom, 2021; Galipó et al., 2018). On the other hand, scholars often conclude that artificially intelligent systems with the capacity to feel pleasure and pain will be created, or are at least theoretically possible (Thompson 1965; Aleksander 1996; Blackmore 1999; Buttazzo 2001; Franklin 2003; Harnad 2003; Holland 2007; Chrisley 2008; Seth 2009; Haikonen 2012; Bringsjord et al. 2015; Angel 2019). Furthermore, recent literature suggests that, even assuming the existence of sentient artificially intelligent systems, said systems would not be eligible for basic protection under current legal systems.…”
Section: Introduction (mentioning)
confidence: 99%