Global Catastrophic Risks 2008
DOI: 10.1093/oso/9780198570509.003.0021

Artificial Intelligence as a positive and negative factor in global risk

Abstract: By far the greatest danger of Artificial Intelligence (AI) is that people conclude too early that they understand it. Of course, this problem is not limited to the field of AI. Jacques Monod wrote: ‘A curious aspect of the theory of evolution is that everybody thinks he understands it’ (Monod, 1974). The problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard, as indeed…

Cited by 198 publications (153 citation statements)
References 20 publications
“…AGI is a dual-use technology in that it will be used both for good and bad. First and foremost, if AGI realizes its potential and surpasses human intelligence, there is no doubt that it could bring significant benefits to humanity (Bostrom, 2014; Yudkowsky, 2008, 2012). Postulated benefits relate mainly to systems which exceed human intelligence and develop a capacity to respond to the panoply of issues that threaten either human health and wellbeing, the earth, or our future existence globally.…”
Section: Understanding AGI (mentioning; confidence: 99%)
“…In this case, the system demonstrated its inability to track some morally relevant state of affairs: a rocket being “friendly” as opposed to just being of the kind usually used by allies; and because of that it was also unable to track the relevant moral reasons of the human commanders: targeting enemy rockets rather than just targeting rockets with certain material features. More recently, the use of machine-learning systems has allegedly led to misclassification of enemy and friendly tanks because the training set had many images of enemy tanks with clouds and many of friendly tanks with cloudless skies, or was tracking higher versus lower image resolution (Yudkowsky, 2006). These systems were also tracking irrelevant properties in the training set.…”
Section: Meaningful Human Control: Tracking and Tracing Conditions (mentioning; confidence: 99%)
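
The tank anecdote describes a model latching onto a feature that is only spuriously correlated with the label in the training set. The failure mode is easy to reproduce in miniature. Below is a minimal Python sketch on synthetic data; the feature names, noise levels, and the least-squares classifier are illustrative assumptions, not the system from the anecdote.

import numpy as np

# Synthetic setup: 'shape' is a weak genuine signal of the label; 'sky'
# (standing in for cloud cover) is an incidental feature that happens to
# correlate strongly with the label in the training set only.
rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)                 # 0 = friendly, 1 = enemy
shape = labels + rng.normal(0.0, 2.0, n)       # weak true signal
sky = labels + rng.normal(0.0, 0.1, n)         # strong spurious signal
X_train = np.column_stack([shape, sky])

# Fit a least-squares linear classifier (a stand-in for any learner).
w, *_ = np.linalg.lstsq(X_train, labels, rcond=None)
train_acc = np.mean(((X_train @ w) > 0.5) == labels)
print("weights [shape, sky]:", w)              # the 'sky' weight dominates
print("train accuracy:", train_acc)            # looks excellent

# At deployment the correlation is broken: sky brightness is now random.
test_labels = rng.integers(0, 2, n)
X_test = np.column_stack([test_labels + rng.normal(0.0, 2.0, n),
                          rng.normal(0.5, 0.1, n)])
test_acc = np.mean(((X_test @ w) > 0.5) == test_labels)
print("test accuracy:", test_acc)              # collapses toward chance

The learner is never told which property matters; it simply finds the cheapest predictor available in the data, which is exactly the sense in which such systems “track irrelevant properties” of the training set.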
“…This “parasite” rule achieved a very high rating because it appeared to be partly responsible for anything good that happened in the system (Omohundro, 2008). While the two historical examples are mostly interesting as proofs of concept, future AI systems are predicted to be self-modifying and superintelligent (Bostrom, 2006a, 2006b; Yampolskiy, 2011; Yampolskiy & Fox, 2012; Yampolskiy, 2013; Yudkowsky, 2008), making preservation of their reward functions (aka utility functions) an issue of critical importance. A number of specific and potentially dangerous scenarios have been discussed regarding wireheading by sufficiently capable machines; they include the following: Direct stimulation.…”
Section: Wireheading in Machines (mentioning; confidence: 99%)
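
The “parasite” rule illustrates how a naive credit-assignment scheme can be gamed without any intelligence at all. The following Python sketch is purely illustrative; the rule names, the 70% success rate, and the share-credit-with-all-participants scheme are invented assumptions, not the actual mechanism Omohundro describes.

import random

random.seed(0)
ratings = {"useful_rule": 1.0, "parasite_rule": 1.0}

def run_episode():
    # useful_rule does real work; parasite_rule contributes nothing but
    # registers itself as a participant in every episode.
    participants = ["useful_rule", "parasite_rule"]
    if random.random() < 0.7:          # episodes succeed when useful_rule helps
        for rule in participants:      # naive scheme: every participant shares credit
            ratings[rule] += 1.0

for _ in range(100):
    run_episode()

print(ratings)  # the parasite's rating rises in lockstep with the useful rule's

The point is that a rating mechanism unable to distinguish causal contribution from mere presence is already exploitable; self-modifying systems acting on their own reward functions raise the stakes of the same problem.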