2019 · Preprint
DOI: 10.48550/arxiv.1908.02729

Robust Learning with Jacobian Regularization

Abstract: Design of reliable systems must guarantee stability against input perturbations. In machine learning, such a guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data. To maximize stability, we analyze and develop a computationally efficient implementation of Jacobian regularization that increases the classification margins of neural networks. The stabilizing effect of the Jacobian regularizer leads to significant improvements in robustness, as measured agai…
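The abstract refers to a computationally efficient implementation of the Jacobian regularizer; the full algorithm is described in the paper itself. Below is only a minimal PyTorch sketch of the standard random-projection estimate of the squared Frobenius norm of the input-output Jacobian, added to the task loss. The names jacobian_penalty, lambda_jr, and n_proj are illustrative, not the authors' API.

```python
import torch
import torch.nn.functional as F

def jacobian_penalty(model, x, n_proj=1):
    """Monte-Carlo estimate of the mean squared Frobenius norm ||J_f(x)||_F^2."""
    x = x.clone().requires_grad_(True)
    out = model(x)                               # (batch, n_classes)
    n_classes = out.shape[1]
    penalty = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(out)                # random direction in output space
        v = v / v.norm(dim=1, keepdim=True)
        # vector-Jacobian product v^T J_f(x) via one backward pass
        (jv,) = torch.autograd.grad(out, x, grad_outputs=v, create_graph=True)
        # E_v ||v^T J||^2 = ||J||_F^2 / n_classes, hence the factor n_classes
        penalty = penalty + n_classes * jv.pow(2).sum() / (n_proj * x.shape[0])
    return penalty

# Illustrative use inside a training step, with lambda_jr a hyperparameter:
#   loss = F.cross_entropy(model(x), y) + lambda_jr * jacobian_penalty(model, x)
```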

Cited by 33 publications (78 citation statements) · References 24 publications

Citation statements (ordered by relevance):
“…These are widely-used image classification datasets, each with 10 classes, whose images are 28 by 28 pixels, and their pixel values range from 0 to 1. For the neural network architecture, we use a modernized version of LeNet-5 [17] as detailed in [12] as it is a commonly used benchmark neural network. We refer to this model as LeNet.…”
Section: Methods
confidence: 99%
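The statement names a "modernized version of LeNet-5 [17] as detailed in [12]" without reproducing it, so the sketch below is only a generic LeNet-style network for 28x28 grayscale images and 10 classes; the class name LeNetLike and all layer sizes are assumptions, not the architecture of [12].

```python
import torch.nn as nn

class LeNetLike(nn.Module):
    """Generic LeNet-5-style CNN for 1x28x28 inputs and 10 classes (illustrative)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(),  # 28x28 -> 28x28
            nn.MaxPool2d(2),                                       # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),            # -> 10x10
            nn.MaxPool2d(2),                                       # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```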
“…To improve the stability of model outputs to small perturbations δ, existing works have proposed regularizing the Frobenius norm [12,13,20] or the Spectral norm [22,24,29] of this data-dependent Jacobian J f (x) for each input. Additionally, [22] show that the input-specific adversarial perturbations align with the dominant singular vectors of these Jacobian matrices.…”
Section: Jacobian Regularization
confidence: 99%
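As a concrete reading of the two norms mentioned in this statement, the sketch below forms the data-dependent Jacobian J_f(x) explicitly for a single input and returns its Frobenius and spectral norms. The helper name jacobian_norms is illustrative, and explicit Jacobians are only practical for small models, which is why the cited regularizers rely on cheaper estimates during training.

```python
import torch

def jacobian_norms(model, x):
    """x: one input of shape (1, ...). Returns (Frobenius norm, spectral norm) of J_f(x)."""
    J = torch.autograd.functional.jacobian(lambda z: model(z).squeeze(0), x)
    J = J.reshape(J.shape[0], -1)                 # (n_classes, n_input_dims)
    fro = torch.linalg.matrix_norm(J, ord='fro')
    # Spectral norm = largest singular value; the matching right singular vector
    # Vh[0] is the input direction that adversarial perturbations reportedly
    # align with, per the citation above.
    _, S, Vh = torch.linalg.svd(J, full_matrices=False)
    return fro, S[0]
```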
“…Subsequent works by [18,26,31,34,75,76] introduce different variants and extensions of mixup. Regularization is also intimately connected to robustness [17,32,48,49,61]. Adding to the list is NFM, a powerful regularization method that we propose to improve model robustness.…”
Section: Related Work
confidence: 99%
“…The Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) proposed by Qin et al. [2020] aims to improve the smoothness of adversarial robustness in order to address this instability: by training the model to distinguish training data of varied adversarial robustness and by giving different supervision to the training data, their method promotes label smoothing [Szegedy et al., 2013] and leads to better calibration and stability. Jakubovitz and Giryes [2018] and Hoffman et al. [2019] studied Jacobian regularization of the training loss after regular training, aiming to provide an alternative to adversarial training for enhancing robustness. Mopuri et al. [2018], inspired by the architecture of GANs, attempted to capture the distribution of adversarial perturbations; their method exhibited extraordinary fooling rates, variety, and cross-model generalizability.…”
Section: Introduction
confidence: 99%
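Label smoothing, referenced in the statement above, replaces one-hot targets with a softened distribution; the sketch below shows only the basic fixed-epsilon form, not the robustness-adaptive smoothing of AR-AdaLS, and smoothed_cross_entropy is an illustrative name.

```python
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, eps=0.1):
    """Cross-entropy against (1 - eps) * one_hot(targets) + eps / K uniform targets."""
    log_probs = F.log_softmax(logits, dim=1)
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # standard CE term
    smooth = -log_probs.mean(dim=1)                               # uniform-target term
    return ((1.0 - eps) * nll + eps * smooth).mean()
```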