2021
DOI: 10.48550/arxiv.2112.03615
Preprint
Saliency Diversified Deep Ensemble for Robustness to Adversaries

Abstract: Deep learning models have shown incredible performance on numerous image recognition, classification, and reconstruction tasks. Although very appealing and valuable due to their predictive capabilities, one common threat remains challenging to resolve. A specifically trained attacker can introduce malicious input perturbations to fool the network, thus causing potentially harmful mispredictions. Moreover, these attacks can succeed when the adversary has full access to the target model (white-box) and even when…

Cited by 1 publication (1 citation statement)
References 22 publications
“…Bogun et al [20] presented an interesting perspective on obtaining better explainability using an ensemble of deep learning models. This ensemble is trained with a regularizer that tries to prevent an adversarial attacker from targeting all ensemble members at once, introduced as an additional term in the learning objective.…”
Section: Other Work Relating Adversarial Robustness and Attributions
confidence: 99%
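The regularized objective described in the citation statement can be illustrated with a minimal sketch. This is not the paper's actual formulation: it assumes linear members (whose input-gradient saliency is simply the weight vector) and a hypothetical penalty on the squared cosine similarity between member saliencies, so that an attack direction effective against one member tends not to transfer to the others.

```python
import numpy as np


def cosine_sim(a, b):
    """Cosine similarity between two saliency vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def ensemble_objective(weights, X, y, lam=0.1):
    """Hypothetical saliency-diversified ensemble loss (sketch).

    For a linear member f_i(x) = w_i . x, the saliency (gradient of the
    output w.r.t. the input) is just w_i, so the diversity term below
    penalizes aligned saliencies across ensemble members.
    """
    # Average per-member logistic loss (the ordinary task objective).
    eps = 1e-12
    losses = []
    for w in weights:
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        losses.append(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))
    task_loss = float(np.mean(losses))

    # Pairwise saliency-alignment penalty: the extra term in the objective.
    penalty = 0.0
    n = len(weights)
    for i in range(n):
        for j in range(i + 1, n):
            penalty += cosine_sim(weights[i], weights[j]) ** 2
    return task_loss + lam * penalty
```

With `lam > 0`, two members sharing the same weight vector pay the full penalty, while orthogonal members pay none, which is the sense in which the regularizer keeps a single perturbation from fooling every member simultaneously.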