Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 2020
DOI: 10.1145/3375627.3375815
The Offense-Defense Balance of Scientific Knowledge

Cited by 32 publications (26 citation statements) · References 7 publications
“…Sophisticated or institutional actors with the capacity to embark on large-scale disinformation, cyberwarfare, or targeted phishing also are likely to have the capacity to create a similar model if none were released. Although potentially significant, these harms should not therefore weigh heavily on a release calculus [Solaiman et al 2019; Shevlane and Dafoe 2020]. The harms to be weighed against the benefits are those from less well-resourced actors who would not be able to create their own foundation model but may be motivated to generate spam or abuse, fake reviews, or cheat on tests.…”
Section: Release and Auditing
confidence: 99%
“…Researchers should weigh national security and societal harm in decisions about how widely to publicize findings, and they should build mitigations prior to release. 214 The "do no harm" principle should weigh heavily in the decision of how much of the model, code, and tutorial to release publicly. 215…”
Section: Build and Apply Ethical Principles for the Publication of AI Research That Can Fuel Disinformation Campaigns
confidence: 99%
“…Many if not most of the AI capabilities described above are -or derive from -dual-use capabilities that are innocuous or beneficial in other applications. Moreover, the culture of AI is characterized by a high degree of openness, and even in cases where the source code is not already openly shared, many new AI algorithms can be independently reproduced by other researchers in a matter of months, making for a low barrier to proliferation (Brundage et al, 2018: 17; Shevlane and Dafoe, 2020). On the supply side, AI tools, especially pre-trained versions, are as accessible as any software; on the demand side, many of these tools offer extensions on, or improvements over, the precise sort of criminal capabilities or technologies which (cyber)criminals have long sought to acquire, whether in terms of pursuing ‘zero-day exploits’, or through tools such as Blackshades.…”
Section: AIC: Estimating the Threat
confidence: 99%