2021
DOI: 10.1609/aaai.v35i12.17237

Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients

Abstract: Adversarial attacks, which generate examples that are almost indistinguishable from natural examples, pose a serious threat to learning models. Defending against adversarial attacks is a critical element of a reliable learning system. The support vector machine (SVM) is a classical yet still important learning algorithm, even in the current deep learning era. Although a wide range of research has been done in recent years to improve the adversarial robustness of learning models, most of it is limited to d…
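For context, the sketch below illustrates the general "doubly stochastic gradients" idea named in the title: each update draws one random training example and one random Fourier feature of the RBF kernel (Rahimi–Recht random features). It does not reproduce the paper's adversarial-training procedure, and the function name, parameters (`gamma`, `reg`, `step`), and toy data are illustrative assumptions only.

```python
import numpy as np

def doubly_stochastic_kernel_svm(X, y, n_iters=2000, gamma=1.0, step=0.1, reg=1e-4, seed=0):
    """Minimal sketch of doubly stochastic gradient training for an RBF-kernel SVM.

    Two sources of randomness per step: a random training example and a random
    Fourier feature of the RBF kernel. This shows only the generic technique,
    not the paper's adversarial-training variant.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # One random Fourier frequency per iteration; omega ~ N(0, 2*gamma*I) so that
    # E[cos(omega·(x - x'))] = exp(-gamma * ||x - x'||^2), the RBF kernel.
    omegas = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_iters, d))
    coefs = np.zeros((n_iters, 2))            # weights on the [cos, sin] pair per iteration

    def f(x, t):
        """Current function value using the first t random features."""
        if t == 0:
            return 0.0
        proj = omegas[:t] @ x                                    # (t,)
        feats = np.stack([np.cos(proj), np.sin(proj)], axis=1)   # (t, 2)
        return np.sum(coefs[:t] * feats)

    for t in range(n_iters):
        i = rng.integers(n)                    # stochastic data sample
        xi, yi = X[i], y[i]
        margin = yi * f(xi, t)
        # Hinge-loss subgradient: update only when the margin constraint is violated.
        g = -yi if margin < 1.0 else 0.0
        proj = omegas[t] @ xi
        coefs[t] = -step * g * np.array([np.cos(proj), np.sin(proj)])
        coefs[:t] *= (1.0 - step * reg)        # shrinkage from the L2 regularizer

    return lambda x: f(x, n_iters)

# Toy usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
clf = doubly_stochastic_kernel_svm(X, y)
print("prediction at (1, 1):", np.sign(clf(np.array([1.0, 1.0]))))
```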

Cited by 7 publications (1 citation statement)
References 25 publications
“…The method performs an attack by minimizing the distance between the decision boundary and benign examples. A zeroth-order optimization algorithm is used with a randomized gradient-free method to minimize such distance and formulate the attack [28, 41, 42]. Table 3 shows the attack parameters of the benchmark ZOO for all datasets.…”
Section: A Proposed Robust Ensemble Adversarial Machine Learning Fram…
Citation type: mentioning; confidence: 99%
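For illustration, here is a hedged sketch of the zeroth-order, gradient-free attack idea described in the citation statement: the attack objective is queried at randomly perturbed points to form a finite-difference gradient estimate, so no model gradients are needed. The function `zeroth_order_attack`, its parameters (`mu`, `eps`, `step`), and the toy linear scorer are assumptions for this sketch, not details taken from the cited ZOO papers.

```python
import numpy as np

def zeroth_order_attack(loss_fn, x, n_steps=200, step=0.05, mu=1e-2, eps=0.3, seed=0):
    """Sketch of a ZOO-style, gradient-free adversarial attack.

    `loss_fn(x)` is assumed to return a scalar attack objective (e.g. the margin
    of the correct class) that we drive down using only function evaluations:
    the gradient is estimated by random two-point finite differences.
    """
    rng = np.random.default_rng(seed)
    x_adv = x.astype(float).copy()
    for _ in range(n_steps):
        u = rng.normal(size=x.shape)                       # random probing direction
        u /= np.linalg.norm(u) + 1e-12
        # Two-point finite-difference estimate of the directional derivative.
        g_dir = (loss_fn(x_adv + mu * u) - loss_fn(x_adv - mu * u)) / (2.0 * mu)
        x_adv -= step * g_dir * u                          # descend along the estimate
        x_adv = np.clip(x_adv, x - eps, x + eps)           # stay close to the benign example
    return x_adv

# Toy usage against a fixed linear scorer: the "loss" is the signed margin, so the
# attack drives the point toward the decision boundary without using gradients.
w, b = np.array([1.0, -2.0]), 0.5
margin = lambda z: float(w @ z + b)      # positive = correctly classified side
x_benign = np.array([1.0, 0.0])
x_adv = zeroth_order_attack(margin, x_benign)
print("margin before:", margin(x_benign), "after:", margin(x_adv))
```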