2019 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit.2019.8849757

An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers

Abstract: We present a simple hypothesis about a compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations. We also propose a new method for detecting when small input perturbations cause classifier errors, and show theoretical guarantees for the performance of this detection method. We present experimental results with a voice recognition system to d…

Cited by 3 publications (2 citation statements)
References 15 publications

“…Conversely, adversarial attacks can have significantly more dramatic effects when redundant observations of the source variable are not available. This lends support to our recently proposed "feature compression" hypothesis [11,14] as an explanation for the adversarial fragility of deep learning systems. Under this hypothesis, deep learning systems are vulnerable to adversarial attacks because they compress their data into a minimal number of features that contain enough information about the source data to allow for sufficiently accurate classification under no adversarial attacks.…”
Section: Introduction (supporting)
confidence: 83%
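
To make the quoted "feature compression" hypothesis concrete, here is a minimal toy sketch (not from the paper; the setup, dimensions, and function names are illustrative assumptions): a classifier that keeps only one compressed feature is compared with one that averages d redundant noisy observations of the same source bit. For these simple linear rules, flipping the compressed decision costs roughly sqrt(d) times less perturbation energy than flipping the redundant one.

import numpy as np

# Hypothetical toy setup, not the paper's model: each coordinate of x is a
# noisy copy of the source label y, so the input carries d-fold redundancy.
rng = np.random.default_rng(0)
d = 100
y = +1
x = y + 0.1 * rng.standard_normal(d)

def compressed_classifier(x):
    # Keeps a single "compressed" feature: the first coordinate only.
    return np.sign(x[0])

def redundant_classifier(x):
    # Uses all redundant observations by averaging before deciding.
    return np.sign(x.mean())

# Smallest L2 perturbations that flip each decision for these linear rules:
#   compressed: push x[0] just past zero            -> cost ~ |x[0]|
#   redundant:  push every coordinate past the mean -> cost ~ sqrt(d) * |mean(x)|
delta_compressed = np.zeros(d)
delta_compressed[0] = -(x[0] + 1e-3)
delta_redundant = -(x.mean() + 1e-3) * np.ones(d)

print("L2 cost to flip compressed classifier:", np.linalg.norm(delta_compressed))
print("L2 cost to flip redundant classifier: ", np.linalg.norm(delta_redundant))
print("compressed flipped:", compressed_classifier(x + delta_compressed) != y)
print("redundant flipped: ", redundant_classifier(x + delta_redundant) != y)

The roughly sqrt(d) gap between the two perturbation costs is the redundancy that, under the quoted hypothesis, a compressed feature map gives up, which is why small input perturbations can already cross its decision boundary.
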
“…Deep learning methods have revolutionized many data processing applications that had previously been considered intractable such as computer vision, natural language processing and speech recognition [1-6]. However, deep learning systems have been shown to be vulnerable to adversarial attacks [7-19]. Specifically, it has been shown that the outputs of many deep learning systems can be manipulated with imperceptibly small perturbations applied to the inputs [10, 16, 20-23].…”
Section: Introduction (mentioning)
confidence: 99%