2018 Data Compression Conference (DCC)
DOI: 10.1109/dcc.2018.00022

Protecting JPEG Images Against Adversarial Attacks

Abstract: As deep neural networks (DNNs) have been integrated into critical systems, several methods to attack these systems have been developed. These adversarial attacks make imperceptible modifications to an image that fool DNN classifiers. We present an adaptive JPEG encoder which defends against many of these attacks. Experimentally, we show that our method produces images with high visual quality while greatly reducing the potency of state-of-the-art attacks. Our algorithm requires only a modest increase in encodi…
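The report does not reproduce the paper's adaptive encoder, but the general idea behind compression-based defenses is easy to illustrate: re-encoding an input through lossy JPEG quantizes away much of the high-frequency, low-amplitude perturbation that attacks rely on. The sketch below uses standard (non-adaptive) Pillow JPEG encoding; the quality setting and the `classifier` stub are illustrative assumptions, not the paper's method.

```python
import io
from PIL import Image

def jpeg_recompress(image: Image.Image, quality: int = 75) -> Image.Image:
    """Round-trip an image through standard JPEG encoding.

    Lossy DCT-domain quantization tends to wash out the small,
    high-frequency perturbations many adversarial attacks add.
    """
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Hypothetical usage; `classifier` stands in for any DNN wrapper.
# pred = classifier(jpeg_recompress(Image.open("input.png")))
```

A fixed quality factor is the crudest version of this idea; per the abstract, the paper's encoder instead adapts the encoding to keep visual quality high while still blunting attacks.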

Cited by 24 publications (23 citation statements) · References 13 publications
“…Defenses: Proposed defenses include detection and rejection methods [32,26,55,61,3,63], pre-processing, quantization and dimensionality reduction methods [12,73,7], manifold-projection methods [40,72,82,86], methods based on stochasticity/regularization or adapted architectures [109,7,68,88,35,43,76,45,51,107], ensemble methods [57,94,34,100], as well as adversarial training [109,65,36,83,90,54,62]; however, many defenses have been broken, often by considering "specialized" or novel attacks [13,15,5,6]. In [6], only adversarial training, e.g., the work by Madry et al. [62], has been shown to be effective, although many recent defenses have not been studied extensively.…”
Section: Related Work
confidence: 99%
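The excerpt above singles out adversarial training in the sense of Madry et al. [62] as the one defense found effective in [6]. Below is a minimal sketch of that training loop, assuming a PyTorch classifier; the epsilon, step size, and step count are common illustrative choices, not values taken from the cited works.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: ascend the loss from a random start, projecting
    back into the eps-ball around the clean input after each step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def adversarial_training_step(model, optimizer, x, y):
    """One outer step: train on adversarial examples instead of clean ones."""
    model.eval()               # keep batch-norm statistics fixed while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```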
“…Existing defenses are either reactive [8,16,17,24,36], i.e. adding an extra element to detect or remove adversarial perturbation, or proactive [3,6,11,18,19,22,25,32], i.e.…”
Section: Related Work
confidence: 99%
“…Transformation-based defenses [8,24,29,34] are reactive defenses that attempt to reform adversarial examples while not changing their semantics. Basic transformations, such as cropping, rescaling, bit-depth reduction, JPEG compression, total variance minimization, and image quilting, succeed in removing adversarial effects to some extent [8].…”
Section: Related Work
confidence: 99%
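Two of the basic transformations named in the excerpt, bit-depth reduction and crop-and-rescale, are simple enough to sketch directly; the parameter values are illustrative and not taken from [8].

```python
import numpy as np
from PIL import Image

def bit_depth_reduce(x: np.ndarray, bits: int = 3) -> np.ndarray:
    """Quantize pixel values in [0, 1] to 2**bits levels, discarding
    the low-order information that perturbations often live in."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def crop_rescale(img: Image.Image, crop_frac: float = 0.9) -> Image.Image:
    """Center-crop a fraction of the image, then rescale to the
    original size, resampling away pixel-aligned perturbations."""
    w, h = img.size
    cw, ch = int(w * crop_frac), int(h * crop_frac)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch)).resize((w, h), Image.BILINEAR)
```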
“…nevertheless, a new sample can always be found to deceive the network, however often we repeat the first idea [24]. Even though other methods such as data compression [25] and data randomization [26] are used, the new adversarial samples that keep appearing render these defense methods ineffective.…”
Section: B. Defense Patterns for Adversarial Samples
confidence: 99%
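Data randomization defenses of the kind cited here as [26] typically rescale and pad the input at random before inference, so an attacker never optimizes against the exact pixel grid the classifier sees. A minimal sketch, assuming Pillow; the output size and scale range are placeholder values.

```python
import random
from PIL import Image

def random_resize_pad(img: Image.Image, out_size: int = 224) -> Image.Image:
    """Randomly rescale the input, then paste it at a random offset on
    a fixed-size black canvas, randomizing the pixel grid per call."""
    scale = random.uniform(0.85, 1.0)
    side = int(out_size * scale)
    resized = img.convert("RGB").resize((side, side), Image.BILINEAR)
    canvas = Image.new("RGB", (out_size, out_size))
    off_x = random.randint(0, out_size - side)
    off_y = random.randint(0, out_size - side)
    canvas.paste(resized, (off_x, off_y))
    return canvas
```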