2016
DOI: 10.1007/978-3-319-26485-1_33

Future Progress in Artificial Intelligence: A Survey of Expert Opinion

Abstract: There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see the…

Cited by 426 publications (270 citation statements)
References 10 publications
“…A survey of a number of conferences, including the top 100 authors on AI, looked at different levels of AI development and asked the participants to select the year they anticipated high-level machine intelligence (HLMI), which was defined as a machine capable of doing the same job as a qualified human, including professional positions, and, using a scale of 5,000 years, to predict both HLMI and superintelligence, or where machines greatly surpass human intelligence. The results of the study showed a convergence of opinion around 2040, with dates as early as 2020 for HLMI, and superintelligence likely to follow between two and thirty years after (Müller & Bostrom, 2016). While the media often portrays a scary image of AI, it is important not to fan the flames of hysteria of science fiction but rather to take a pragmatic view that we can accomplish great steps forward with technology, and while we may face disruption of industries and markets as we develop the new world with a basic standard operating environment, many of the concerns can be eliminated (Kaplan, 2017).…”
Section: Artificial Intelligence
confidence: 79%
“…Disruptive innovation and disruptive technology are now part of the business vocabulary, and we have only witnessed the start of the disruptive wave that is quickly moving towards the business world. The timelines for technology change have moved from linear to exponential, and experts are forecasting that what happens during the next decade will eclipse what has occurred over the last century in its level of profound change (Diamandis & Kotler, 2016; Müller & Bostrom, 2016). Yu & Hang (2010) highlighted that more research was needed to address disruptive technologies as these were likely to be the main drivers for disruptive innovation.…”
Section: Christensen's Disruptive Innovation Theory
confidence: 99%
“…Given the many recent warnings about AI, Müller and Bostrom (2016) collected opinions from researchers in the field, including highly cited experts, to get their view on the future. 170 responses out of 549 invitations were collected.…”
Section: The Future Potential of Robotics and AI
confidence: 99%
“…Economic history abounds with examples of periods of explosive growth, and 90% of experts (primarily representing computer science) surveyed from a number of polls conclude that man-made systems will reach human-level general intelligence by 2075, with 75% of researchers concluding that these systems will progress to superintelligence within 30 years of attaining human-level intelligence. These figures, however, are conservative estimates; although he does not offer a conclusive date, Bostrom himself argues that superintelligence will likely arrive in a few decades (Müller & Bostrom, 2014) and that the transition from human-level intelligence to superintelligence will be explosively fast (perhaps within hours or minutes). Next, he delineates various possible paths to superintelligence and notes that the existence of multiple possible paths increases the plausibility of eventually developing superintelligence.…”
Section: Book Review
confidence: 99%