2021
DOI: 10.1371/journal.pone.0244592

Mediating artificial intelligence developments through negative and positive incentives

Abstract: The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business and also policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, a belief in this narrative may be detrimental as some stakeholders will feel obliged to cut corners on safety precautions, or ignore societal consequences just to “win”. Starting from a baseline model that describ…

Citations: cited by 31 publications (34 citation statements)
References: 60 publications
“…The DSAIR model and associated analysis provides thus an instrument for researchers interested in AI regulation and policy making to think about the supporting mechanisms (such as suitable rewards and sanctions) (Sigmund, 2010; Sotala & Yampolskiy, 2014; Szolnoki & Perc, 2013; Han, Pereira, & Lenaerts, 2015, 2019; Vinuesa et al., 2020) needed to mediate a given race; for preliminary results, see our recent work in (Han et al., 2020). In the early DSAI, controlling the development speed of AI teams appears essential.…”
Section: Discussion
confidence: 99%
“…Indeed, we explicitly tested in our simulations what would happen if companies that take risks are always sanctioned (Han et al., 2021b), reducing their speed but at the cost of speed reduction by the sanctioning party. As anticipated, over-regulation, conducive to beneficial innovation being stifled, occurred whenever the gain from speeding up out-benefited the taking of risk.…”
Section: Lessons for AI Governance Policies
confidence: 99%
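The dynamic quoted above can be illustrated with a minimal toy calculation; this is a sketch under assumed parameters, not the published DSAIR simulations. The payoff structure, parameter names, and values below are all illustrative assumptions.

```python
# Toy sketch only (illustrative assumptions, not the published DSAIR code):
# a SAFE sanctioning party always punishes a risk-taking UNSAFE rival,
# paying a speed cost of its own to enforce the sanction.

def round_payoffs(speed_bonus, prize, p_disaster, sanction, enforce_cost):
    """Expected one-round payoffs (unsafe_player, safe_sanctioner)."""
    # UNSAFE: develops faster, but the product fails with probability
    # p_disaster, and the player always absorbs the sanction.
    unsafe = (1 - p_disaster) * prize * (1 + speed_bonus) - sanction
    # SAFE: keeps baseline speed and pays the enforcement cost.
    safe = prize - enforce_cost
    return unsafe, safe

for speed_bonus in (0.1, 0.5, 1.0):
    unsafe, safe = round_payoffs(speed_bonus, prize=1.0, p_disaster=0.15,
                                 sanction=0.2, enforce_cost=0.1)
    print(f"speed bonus {speed_bonus:.1f}: UNSAFE={unsafe:.2f}  SAFE={safe:.2f}")
```

With these toy numbers, sanctioning only makes SAFE the better choice at the smallest speed bonus; for larger bonuses the sanction merely slows both parties while risk-taking still pays, which mirrors the over-regulation regime described in the quote.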
“…Here, we summarise our previous works (Han et al., 2020, 2021b) examining this problem theoretically, resorting to a novel innovation dilemma where technologists can choose a safe (SAFE) vs risk-taking (UNSAFE) course of development. Companies race towards the deployment of some AI-based product in some domain X.…”
Section: Introduction
confidence: 99%
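As a rough illustration of that dilemma, the one-shot symmetric game below gives each of two racing companies a SAFE or UNSAFE choice. This is a sketch under assumed numbers, not the exact formulation in the cited papers: the prize B, safety cost C, disaster probability P, and the tie-sharing rule are all hypothetical.

```python
# Illustrative one-shot sketch of a SAFE vs UNSAFE innovation dilemma.
# All payoff assumptions are hypothetical, chosen only for exposition.

B = 10.0  # prize for being first to deploy the AI-based product in domain X
C = 1.0   # cost of following safety precautions
P = 0.3   # probability that an UNSAFE (risk-taking) product causes a disaster

def expected_payoff(me, other):
    """Expected payoff of strategy `me` ('SAFE' or 'UNSAFE') against `other`."""
    if me == "SAFE" and other == "SAFE":
        return B / 2 - C        # equal speed: share the prize, both pay the safety cost
    if me == "SAFE" and other == "UNSAFE":
        return P * B - C        # the faster rival wins unless its product fails
    if me == "UNSAFE" and other == "SAFE":
        return (1 - P) * B      # win by speed, but only if no disaster occurs
    return (1 - P) * B / 2      # both cut corners: share the prize, discounted by risk

for me in ("SAFE", "UNSAFE"):
    for other in ("SAFE", "UNSAFE"):
        print(f"{me:>6} vs {other:<6}: {expected_payoff(me, other):5.2f}")
```

With these numbers UNSAFE is the better reply to either choice by the rival, yet mutual SAFE pays more than mutual UNSAFE, reproducing the dilemma structure the quote refers to.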
“…Institutional enforcement mechanisms are crucial for enabling large-scale cooperation. Most modern societies have implemented different forms of institutions for governing and promoting collective behaviors, including cooperation, coordination, and technology innovation [Ostrom, 1990, Bowles, 2009, Bowles and Gintis, 2002, Bardhan, 2005, Han et al., 2021, Scotchmer, 2004].…”
Section: Introduction
confidence: 99%