2021
DOI: 10.1007/s10462-021-09975-1
A systematic review on overfitting control in shallow and deep neural networks

Cited by 261 publications (108 citation statements)
References 138 publications
“…We can also observe that GL obtains a greater classification accuracy improvement (19.2%) over the other methods on all datasets. Neural networks have a high capacity to overfit the untrustworthy small tabular data [18], [57], which causes poor generalization ability on test in-distribution samples. GL alleviates this problem by adaptively applying ground-truth labels and complementary labels for in- and out-of-distribution samples, respectively, where the complementary labels decrease the risk of providing incorrect label information.…”
Section: Comparison Results
Citation type: mentioning (confidence: 99%)
“…The feedforward model architecture used was limited in the number of hidden layers and neurons per hidden layer to allow for the execution of a variety of different model evaluations and the time required to train one model per evaluation. Further, more complex models might have run the risk of overfitting due to a larger number of trainable hyperparameters [43].…”
Section: Feedforward Neural Network
Citation type: mentioning (confidence: 99%)
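The point in this excerpt — that adding hidden layers and neurons multiplies the number of trainable parameters, and with them the risk of overfitting — can be made concrete with a small parameter-count sketch. The `count_params` helper and the layer sizes below are illustrative assumptions, not taken from the cited study:

```python
def count_params(layer_sizes):
    """Trainable parameters of a fully connected feedforward network:
    one weight per input-output pair plus one bias per output neuron,
    summed over consecutive layer pairs."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A modest network vs. a wider, deeper one on the same 10-feature input:
small = count_params([10, 8, 1])       # 10*8+8 + 8*1+1 = 97
large = count_params([10, 64, 64, 1])  # 704 + 4160 + 65 = 4929
```

The larger architecture has roughly fifty times as many parameters to estimate from the same training data, which is the capacity gap the excerpt controls by limiting the number of hidden layers and neurons.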
“…A model with too little capacity cannot learn the given problem, whereas a model with too much capacity can learn it too well and overfit the training dataset; thus, such a model does not have the ability to generalize the knowledge well [ 38 ]. Thus, the selection of a model size that maximizes generalization is an important topic that has been given a lot of attention in the last years as the field of cognitive neuroscience develops [ 39 ].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
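The capacity trade-off this excerpt describes — a model that learns the training set "too well" generalizes poorly — can be demonstrated with a stdlib-only toy experiment. The data and both models below are hypothetical illustrations, not from the review: a one-nearest-neighbour "memorizer" reaches zero training error on noisy linear data yet tests worse than a two-parameter line fit.

```python
import random

random.seed(0)

# Synthetic 1-D regression task: y = 2x + Gaussian noise.
def sample(n):
    xs = [random.uniform(0.0, 1.0) for _ in range(n)]
    ys = [2.0 * x + random.gauss(0.0, 0.3) for x in xs]
    return xs, ys

train_x, train_y = sample(30)
test_x, test_y = sample(200)

def mse(predict, xs, ys):
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# High-capacity model: 1-nearest-neighbour memorizes every training label.
def one_nn(x):
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

# Low-capacity model: ordinary least-squares line (two parameters).
n = len(train_x)
mx = sum(train_x) / n
my = sum(train_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx

def line(x):
    return slope * x + intercept

# The memorizer fits the training set perfectly but fits the noise too;
# the two-parameter line has nonzero training error yet generalizes better.
nn_train, nn_test = mse(one_nn, train_x, train_y), mse(one_nn, test_x, test_y)
ln_train, ln_test = mse(line, train_x, train_y), mse(line, test_x, test_y)
```

Here the memorizer's test error is dominated by the noise it reproduced (roughly twice the noise variance), while the line's test error stays near the noise floor — the same pattern the excerpt attributes to models with too much capacity.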