2020
DOI: 10.1002/hfm.20883

Putting the humanity into inhuman systems: How human factors and ergonomics can be used to manage the risks associated with artificial general intelligence

Abstract: The next generation of artificial intelligence, known as artificial general intelligence (AGI), could either revolutionize or destroy humanity. As the discipline which focuses on enhancing human health and wellbeing, human factors and ergonomics (HFE) has a crucial role to play in the conception, design, and operation of AGI systems. Despite this, there has been little examination of how HFE can influence and direct this evolution. This study uses a hypothetical AGI system, Tegmark's “Prometheus,” to frame t…

Cited by 28 publications (35 citation statements)
References 47 publications
“…Stanton et al., 2019), they do not, at present, pose a significant threat to humanity (Bentley, 2018). This is not the case with AGI, with many scholars discussing potential existential threats (Salmon et al., 2021). The risks associated with AGI are generated by the challenge of controlling an agent that is substantially more intelligent than us (Baum, 2017).…”
Section: Introduction
confidence: 99%
“…At this point, which is estimated to occur between 2040 and 2070 (Baum et al., 2011; Müller & Bostrom, 2016), it is hypothesised that an AGI will have the capability to recursively self-improve by creating more intelligent versions of itself, as well as altering its preprogrammed goals (Tegmark, 2017). The emergence of AGI could bring about numerous societal challenges, from AGIs replacing the workforce and manipulating political and military systems, through to the extinction of humans (Bostrom, 2002, 2014; Salmon et al., 2021; Sotala & Yampolskiy, 2015). Given the many known and unknown risks regarding AGI, the scientific community holds concerns regarding the threats that an AGI may pose to humanity (Bradley, 2020; Yampolskiy, 2012).…”
Section: Introduction
confidence: 99%