2020
DOI: 10.1186/s13635-020-00106-x
Machine learning through cryptographic glasses: combating adversarial attacks by key-based diversified aggregation

Abstract: In recent years, classification techniques based on deep neural networks (DNN) were widely used in many fields such as computer vision, natural language processing, and self-driving cars. However, the vulnerability of the DNN-based classification systems to adversarial attacks questions their usage in many critical applications. Therefore, the development of robust DNN-based classifiers is a critical point for the future deployment of these methods. Not less important issue is understanding of the mechanisms b…

Cited by 10 publications (3 citation statements)
References 50 publications
“…To randomize: At testing, the classifier depends on a secret key or an alea. This blocks pure white-box attacks [37,38].…”
Section: Defenses
confidence: 98%
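The randomisation idea quoted above can be illustrated with a small sketch: wrap an arbitrary classifier with a key-seeded input permutation applied at test time, so an attacker who does not hold the secret key cannot reproduce the exact forward pass needed for a pure white-box gradient attack. This is an illustrative construction only; the base model, the choice of a pixel permutation, and the key handling are assumptions, not the exact scheme of the cited defenses [37,38].

```python
import torch
import torch.nn as nn


def key_seeded_permutation(num_pixels: int, secret_key: int) -> torch.Tensor:
    """Derive a fixed pixel permutation from a secret key (illustrative only)."""
    g = torch.Generator().manual_seed(secret_key)
    return torch.randperm(num_pixels, generator=g)


class KeyedClassifier(nn.Module):
    """Wrap any classifier with a key-dependent input permutation at test time."""

    def __init__(self, base_model: nn.Module, secret_key: int, num_pixels: int):
        super().__init__()
        self.base_model = base_model
        # Buffer: stored with the model but never trained.
        self.register_buffer("perm", key_seeded_permutation(num_pixels, secret_key))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = x.shape[0]
        shuffled = x.reshape(b, -1)[:, self.perm]  # secret, key-derived shuffling
        return self.base_model(shuffled.reshape(x.shape))
```

Usage would look like `KeyedClassifier(cnn, secret_key=1234, num_pixels=784)` for a hypothetical 28×28 input; without `secret_key`, the attacker's surrogate forward pass differs from the deployed one.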
“…Olga Taran et al. propose a novel Key-based Diversified Aggregation method as a defence strategy to handle both grey- and black-box adversarial attacks [24]. The randomisation method used in this work prevents backpropagation of gradients, thus preventing the adversarial attack from bypassing the model.…”
Section: Literature Review
confidence: 99%
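Building on the sketch above, the following is a simplified, hypothetical rendering of the structure behind Key-based Diversified Aggregation: several branches, each defined by its own secret key, apply a key-seeded sign-flip transform to the input, run their own classifier, and the branch outputs are averaged. The transform domain, number of branches, classifier architecture, and training procedure of the actual KDA defence in [24] are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiversifiedAggregation(nn.Module):
    """Simplified multi-branch, key-based diversified aggregation (sketch only)."""

    def __init__(self, classifiers, keys, input_numel):
        super().__init__()
        assert len(classifiers) == len(keys)
        self.branches = nn.ModuleList(classifiers)
        signs = []
        for key in keys:
            g = torch.Generator().manual_seed(key)
            # Key-seeded +/-1 mask: the secret "diversification" of this branch.
            signs.append(torch.randint(0, 2, (input_numel,), generator=g) * 2 - 1)
        self.register_buffer("signs", torch.stack(signs).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = x.reshape(x.shape[0], -1)
        branch_probs = []
        for sign, clf in zip(self.signs, self.branches):
            xi = (flat * sign).reshape(x.shape)  # branch-specific secret transform
            branch_probs.append(F.softmax(clf(xi), dim=1))
        # Aggregate: average the per-branch class probabilities.
        return torch.stack(branch_probs).mean(dim=0)
```

Because each branch's transform is derived from a key the attacker does not know, gradients computed on a surrogate model do not backpropagate correctly through the deployed pipeline, which is the property the quoted statement attributes to the defence.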
“…Prakash et al. proposed using a DNN-based adaptive JPEG encoder to preprocess the input [24]. For randomization or private keying, Taran et al. proposed a key-based diversified aggregation mechanism to defend against gray- and black-box adversarial attacks [25]. Several adversarial databases have been independently created for evaluation, but guidelines for creating them have not been reported in detail [11], [17], [23].…”
Section: Introduction
confidence: 99%