2018
DOI: 10.3390/sym10110648

A Comparison of Regularization Techniques in Deep Neural Networks

Abstract: Artificial neural networks (ANNs) have attracted significant attention from researchers because many complex problems can be solved by training them. If enough data are provided during training, ANNs can achieve good performance. However, if the training data are insufficient, the predefined neural network model suffers from overfitting and underfitting problems. To solve these problems, several regularization techniques have been devised and widely applied to applications and data an…
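As a concrete illustration of the kind of techniques the paper compares, a minimal PyTorch sketch of two common regularizers, dropout and L2 weight decay, might look like the following; the layer sizes and hyperparameters are illustrative assumptions, not values taken from the paper:

```python
import torch
import torch.nn as nn

# Two common regularizers in one small model: dropout layers plus
# L2 weight decay applied through the optimizer. Sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes 50% of activations during training
    nn.Linear(256, 10),
)

# weight_decay adds an L2 penalty on the weights at every gradient step.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```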

Cited by 92 publications (55 citation statements).
References 42 publications (34 reference statements).
“…If the data contain highly correlated features, then the machine learning algorithms, in general, perform poorly. Regularization techniques are used to overcome the issues of overfitting, whereas underfitting would require the acquisition of more data, and that is not an issue in the case of big data [32].…”
Section: A. Data Acquisition
confidence: 99%
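To make the point about correlated features concrete, the sketch below (an illustration, not code from either paper) compares an ordinary least-squares fit with an L2-regularized ridge fit on two nearly identical features using scikit-learn; without the penalty, the coefficients of correlated predictors can grow large with opposite signs, while ridge shrinks them toward a stable split:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Two highly correlated features: x2 is x1 plus a little noise.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(size=200)

# OLS coefficients on collinear features are unstable; the L2 penalty
# in Ridge shrinks and stabilizes them.
print(LinearRegression().fit(X, y).coef_)
print(Ridge(alpha=1.0).fit(X, y).coef_)
```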
“…The technical reasons for SCL picking both CNN and LSTM to defend against LFA in IoT networks are twofold. First, conventional deep learning approaches, which apply a single constituent learning algorithm, often encounter overfitting problems [31,32,33]. Overfitting is when the model is trained exactly to a particular set of data and unwittingly extracts variation (i.e., the noise) as if that variation represented the underlying population structure; in such cases, the model may therefore fail to fit additional data or predict future observations.…”
Section: Introduction
confidence: 99%
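As a rough illustration of the kind of hybrid the quoted work alludes to, an assumed PyTorch CNN + LSTM model for sequence classification might look like the following; the layer sizes, feature count, and class count are placeholders, not details from the cited paper:

```python
import torch
import torch.nn as nn

# An illustrative CNN + LSTM hybrid for sequence classification.
# Layer sizes, feature count, and class count are assumptions.
class CNNLSTM(nn.Module):
    def __init__(self, n_features=32, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 64, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):  # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2)).relu()  # convolve over the time axis
        out, _ = self.lstm(z.transpose(1, 2))    # back to (batch, time, 64)
        return self.head(out[:, -1])             # last step's state -> logits

logits = CNNLSTM()(torch.randn(4, 50, 32))  # example: batch of 4, 50 time steps
```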
“…Overfitting is a term used when the network model performs extremely well on the training data but fails to work well on the test data. In an overfitted network, the validation error goes up while the training error comes down [21].…”
Section: Introduction
confidence: 99%
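The training/validation divergence described in [21] is exactly the signal exploited by early stopping, one of the regularization techniques the surveyed paper evaluates. A minimal self-contained sketch follows; the synthetic data, model, and patience value are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn

# Synthetic regression task used purely to illustrate early stopping.
torch.manual_seed(0)
X = torch.randn(256, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)
X_tr, y_tr, X_va, y_va = X[:192], y[:192], X[192:], y[192:]

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# Early stopping: halt once validation loss stops improving for `patience` epochs.
best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    optimizer.zero_grad()
    loss_fn(model(X_tr), y_tr).backward()  # training step
    optimizer.step()
    with torch.no_grad():
        val_loss = loss_fn(model(X_va), y_va).item()  # held-out validation loss
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # validation error has been rising: stop training
model.load_state_dict(best_state)  # restore the best checkpoint
```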