2021
DOI: 10.51593/2021ca007

"Making AI Work for Cyber Defense: The Accuracy-Robustness Tradeoff"

Abstract: Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs based on the types of threats, defenders are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.



Cited by 4 publications (6 citation statements)

References 9 publications
“…-(A.6.2) Automated red agents must be provided with a wide variety of cyber attacks (specified within the MITRE ATT&CK framework) -(A.6.3) along with a variety of algorithmic attacks [59] to address systems vulnerabilities.…”
Section: Resilience
confidence: 99%
“…8.1.9 Impact of Incorrect Action (G.6.1, G.1.3) [41]. The above issue also leads on a gap within the ACD literature for automated decision-making agents.…”
Section: Explainable RL (A24)
confidence: 99%
“…This can not only invalidate testing and verification, but also make the system potentially vulnerable to a patient attacker that gradually tweaks normal behavior to avoid appearing anomalous. 28 Third, fixing ML vulnerabilities often creates other problems. 29 To address vulnerabilities in ML systems, developers have to retrain the system so that it is no longer susceptible to that deception.…”
Section: While
confidence: 99%
“…Instead, they tend to make trade-offs, making the system perform better under one set of conditions but potentially worse in others. 35 This is a problem in adversarial contexts; an attacker can adapt its behavior to exploit the lingering weaknesses of the system. These persistent problems with safety and security raise the question of whether decision makers will trust applications of ML for decision advantage.…”
Section: While
confidence: 99%
“…Primarily for security tasks, such as NIDSs, robustness is the main concern for trustworthy real-world ML applications [6]. The considerable demand for robustness partially constrains the real-world implementation of ML-based NIDSs [7]. On one hand, research on the reliability and trustworthiness of ML-based NIDSs is still in the early stage [8,9].…”
Section: Introduction
confidence: 99%