2021
DOI: 10.48550/arxiv.2102.07379
Preprint

On the Inherent Regularization Effects of Noise Injection During Training

Oussama Dhifallah, Yue M. Lu

Abstract: Randomly perturbing networks during the training process is a commonly used approach to improving generalization performance. In this paper, we present a theoretical study of one particular form of random perturbation, which corresponds to injecting artificial noise into the training data. We provide a precise asymptotic characterization of the training and generalization errors of such randomly perturbed learning problems on a random feature model. Our analysis shows that Gaussian noise injection in the training…
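The setup the abstract describes — noise injected into the training data of a random feature model — can be illustrated with a minimal sketch. All dimensions, the ReLU feature map, and the number of noisy copies below are illustrative assumptions, not values from the paper; the augmentation-by-noisy-copies view emulates injecting fresh Gaussian noise at each pass over the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from the paper).
n, d, p = 200, 30, 100   # samples, input dimension, random features
sigma = 0.5              # std of the injected Gaussian noise

# Random feature model: phi(x) = relu(W x), with W fixed at random.
W = rng.normal(size=(p, d)) / np.sqrt(d)
relu = lambda z: np.maximum(z, 0.0)

# Synthetic data from a noisy linear teacher.
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) / np.sqrt(d) + 0.1 * rng.normal(size=n)

def fit_readout(X_train, y_train, reg=1e-6):
    """Least-squares fit of the readout weights on the random features."""
    Phi = relu(X_train @ W.T)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(p), Phi.T @ y_train)

# Noise injection emulated as data augmentation: stack several
# independently perturbed copies of the training set.
copies = 20
X_noisy = np.vstack([X + sigma * rng.normal(size=X.shape)
                     for _ in range(copies)])
y_noisy = np.tile(y, copies)

a_clean = fit_readout(X, y)
a_noisy = fit_readout(X_noisy, y_noisy)

# The regularization effect shows up as a shrunken readout norm,
# much like an explicit ridge penalty would produce.
print(np.linalg.norm(a_noisy), np.linalg.norm(a_clean))
```

For a purely linear feature map, training on infinitely many noisy copies is exactly equivalent to ridge regression with penalty proportional to the noise variance, which is the intuition behind the regularization effect characterized in the paper.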

Cited by 1 publication (4 citation statements)
References 31 publications (138 reference statements)
“…We also need to analyze the effect of DA with noisy copies on the training process of neural networks. The analysis on the regularization effect of layered and convolutional neural networks may be looked at in [4]. On the other hand, a mixture of apparent acceleration and regularization effect obtained by DA may have a different effect on the training process of neural networks.…”
Section: Discussion
confidence: 99%
“…DA with on-line noisy copies can be viewed as a kind of noise injection into the training process, in which there are methods of injecting noise into inputs, weights and hidden output; e.g. [2], [4], [9], [15], [16]. In this direction, there are considerable works that discuss how to overcome adversarial examples particularly in images; e.g., see [16].…”
Section: Related Work
confidence: 99%
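The citation statement above distinguishes three places where noise can be injected during training: the inputs, the weights, and the hidden outputs. A minimal sketch of a single forward pass with all three injection points, using assumed Gaussian noise and a hypothetical two-layer network (none of the sizes or noise levels come from the cited works):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_forward(x, W1, W2, sigma_in=0.1, sigma_w=0.1, sigma_h=0.1):
    """One forward pass with the three injection points named in the text:
    input noise, weight noise, and hidden-output noise (all Gaussian)."""
    x_tilde = x + sigma_in * rng.normal(size=x.shape)     # input injection
    W1_tilde = W1 + sigma_w * rng.normal(size=W1.shape)   # weight injection
    h = np.tanh(W1_tilde @ x_tilde)
    h_tilde = h + sigma_h * rng.normal(size=h.shape)      # hidden-output injection
    return W2 @ h_tilde

# Hypothetical two-layer network for illustration.
d, p = 8, 16
W1 = rng.normal(size=(p, d)) / np.sqrt(d)
W2 = rng.normal(size=(1, p)) / np.sqrt(p)
x = rng.normal(size=d)

out = noisy_forward(x, W1, W2)
```

During training, fresh noise would be drawn at every pass, so each of the three injection points acts as a distinct stochastic regularizer on the learned parameters.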