2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00124

Feedback Learning for Improving the Robustness of Neural Networks

Abstract: Recent research studies revealed that neural networks are vulnerable to adversarial attacks. State-of-the-art defensive techniques add various adversarial examples in training to improve models' adversarial robustness. However, these methods are not universal and cannot defend against unknown or non-adversarial evasion attacks. In this paper, we analyze the model robustness in the decision space. A feedback learning method is then proposed to understand how well a model learns and to facilitate the retraining process o…
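The truncated abstract outlines the core loop: probe how well the model has learned each region of the decision space, craft examples in the weakly learned regions, and retrain on them. A minimal PyTorch sketch of such a loop, assuming a hypothetical `craft_boundary_examples` helper (the paper's exact crafting procedure is not reproduced here):

```python
# Illustrative feedback-learning loop: augment each batch with crafted
# examples near poorly learned decision boundaries, then retrain.
# `craft_boundary_examples` is a hypothetical helper, not the paper's code.
import torch
import torch.nn.functional as F

def feedback_retrain(model, loader, craft_boundary_examples, optimizer, epochs=5):
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            # Crafted points probe the decision boundaries of weak classes.
            x_crafted, y_crafted = craft_boundary_examples(model, x, y)
            x_all = torch.cat([x, x_crafted], dim=0)
            y_all = torch.cat([y, y_crafted], dim=0)
            loss = F.cross_entropy(model(x_all), y_all)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```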

Cited by 6 publications (5 citation statements) · References 29 publications

“…Later, [43] finds that quantized DNNs are actually more vulnerable to adversarial attacks due to the error amplification effect, i.e., the magnitude of an adversarial perturbation is amplified as it passes through the DNN layers. To tackle this effect, [43,68] propose robustness-aware regularization methods for DNN training, and [69] retrains the network via feedback learning [70]. In addition, [55] searches for layerwise precision and [26] constructs a unified formulation to balance and enforce the models' robustness and compactness.…”
Section: Related Work
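The error amplification effect quoted above can be made concrete by measuring how the norm of a small input perturbation grows layer by layer. A generic PyTorch sketch (illustrative, not taken from [43]), using forward hooks on leaf modules and assuming each leaf returns a tensor:

```python
import torch

def layerwise_amplification(model, x, delta):
    """Per-layer ratios ||f_l(x + delta) - f_l(x)|| / ||delta||.
    Ratios much greater than 1 indicate amplification of the perturbation."""
    clean, perturbed = [], []

    def make_hook(store):
        def hook(_module, _inputs, output):
            store.append(output.detach().flatten(1))
        return hook

    model.eval()
    for store, inp in ((clean, x), (perturbed, x + delta)):
        leaves = [m for m in model.modules() if len(list(m.children())) == 0]
        handles = [m.register_forward_hook(make_hook(store)) for m in leaves]
        with torch.no_grad():
            model(inp)
        for h in handles:
            h.remove()

    d = delta.flatten(1).norm(dim=1)
    return [((p - c).norm(dim=1) / d).mean().item()
            for c, p in zip(clean, perturbed)]
```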
“…Decision Space: In [4], the authors study the robustness of deep neural networks in the image domain with respect to unrestricted evasion attacks. Specifically, they analyze the model's behavior concerning class margins and improve the model's robustness by increasing the proportion of samples in vulnerable classes.…”
Section: Related Work
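A sketch of the class-margin analysis described in this quote: rank classes by their average logit margin, so that low-margin (vulnerable) classes can be given a larger share of training samples. This is an illustrative reconstruction, not the authors' exact code:

```python
import torch

def class_margins(model, loader, num_classes):
    """Average margin (true-class logit minus best other-class logit) per
    class. Small or negative averages mark vulnerable classes to oversample."""
    sums = torch.zeros(num_classes)
    counts = torch.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            logits = model(x)
            true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
            # Mask the true class so max() picks the strongest competitor.
            others = logits.scatter(1, y.unsqueeze(1), float("-inf"))
            margins = true_logit - others.max(dim=1).values
            sums.index_add_(0, y, margins)
            counts.index_add_(0, y, torch.ones_like(margins))
    return sums / counts.clamp(min=1)
```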
“…Moreover, it should be noted that there may be classes with overlapping or similar distributions that make boundary shifting difficult. Instead of simply retraining the model on a portion of, or all of, the crafted examples [4], here we propose a distance-based metric for selecting valid data points. It is a reliable metric, as it helps us fine-tune the behavior of a specific class without interfering with the performance of the adjacent classes.…”
Section: Robustness Improvement Using CE
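The distance-based selection could look roughly like the following: keep a crafted example only if it lies closer, in feature space, to its target class centroid than to any other centroid. Here `featurize` (a feature extractor) and the margin threshold are assumptions for illustration, not the authors' exact metric:

```python
import torch

def select_valid(crafted, labels, centroids, featurize, margin=0.0):
    """Filter crafted examples by feature-space distance.
    centroids: (num_classes, d) per-class feature means.
    Keeps points nearer their target centroid than any other centroid,
    so retraining a class does not disturb adjacent classes."""
    with torch.no_grad():
        feats = featurize(crafted)                       # (n, d)
    dists = torch.cdist(feats, centroids)                # (n, num_classes)
    d_target = dists.gather(1, labels.unsqueeze(1)).squeeze(1)
    d_other = dists.scatter(1, labels.unsqueeze(1),
                            float("inf")).min(dim=1).values
    keep = d_target + margin < d_other
    return crafted[keep], labels[keep]
```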
“…is adversarially trained by Madry et al. [22]; F.L. is retrained from the original model using feedback learning, as proposed in Song et al. [32]. Although methods like feedback learning can minimize the quantization loss, the most effective and efficient way to improve adversarial robustness is still adversarial training.…”
Section: Adversarial Training and Quantization
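For context, the adversarial-training baseline referenced here (Madry et al. [22]) trains on projected-gradient-descent (PGD) adversarial examples. A standard, generic sketch with illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD: random start, signed-gradient steps, projection to the
    eps-ball around x, and clamping to the valid pixel range [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to L-inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adv_train_step(model, x, y, optimizer):
    """One adversarial-training step: train on PGD examples only."""
    model.train()
    loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```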