2018
DOI: 10.1109/access.2018.2810849
Improvement of Generalization Ability of Deep CNN via Implicit Regularization in Two-Stage Training Process

Cited by 231 publications (129 citation statements)
References 35 publications
“…8 reveals that the location of some landmarks may occasionally appear more accurate in the output of the presented methods than in the original so-called ground truth. This is a well-known effect of machine learning methods, and of DL methods in particular, which tend to avoid encoding noise and outliers by means of regularization techniques 50,51 . In general, a larger dataset labelled by several experts may limit the impact of uncertainties in the ground-truth data and reinforce the robustness of the DL methods and their resistance to noise.…”
Section: Number Of Epochs
confidence: 99%
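The excerpt above credits regularization with discouraging a model from encoding noise and outliers. As a minimal illustrative sketch (not taken from the cited paper), L2 weight decay is one such technique: it adds a penalty term to the gradient that shrinks every weight toward zero at each update.

```python
import numpy as np

def sgd_step_with_weight_decay(w, grad, lr=0.1, weight_decay=0.01):
    """One SGD step with an L2 penalty (hypothetical parameter values)."""
    # The effective gradient adds weight_decay * w, pulling weights toward zero,
    # which limits the model's capacity to memorize noise in the labels.
    return w - lr * (grad + weight_decay * w)

w = np.array([1.0, -2.0, 0.5])
# Even with a zero data gradient, the decay term shrinks the weights.
w_next = sgd_step_with_weight_decay(w, np.zeros_like(w))
print(w_next)  # each weight shrinks by the factor (1 - lr * weight_decay)
```

Each weight is multiplied by (1 − lr · weight_decay) per step, so large weights (which often encode noise) decay fastest in absolute terms.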
“…Sun et al. [ 25 ] proposed SVD-Net to show that constraining the feature weights of the FC layer increases the orthogonality of the network and improves accuracy. Zheng et al. [ 26 ] reported that regularization is an efficient method for improving the generalization ability of deep CNNs, because it makes it possible to train more complex models while maintaining lower overfitting, and proposed a method for optimizing the feature boundary of a deep CNN through a two-stage training process to reduce overfitting. However, the mixed features learned by the CNN potentially reduce the robustness of network models for identification or classification.…”
Section: Related Work
confidence: 99%
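The orthogonal constraint on FC-layer weights mentioned for SVD-Net can be illustrated with a soft orthogonality penalty, the squared Frobenius distance between the row Gram matrix and the identity. This is a hedged sketch in that spirit; the function name and scale are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality penalty for a weight matrix W (rows = output units).

    Penalizes deviation of W @ W.T from the identity, i.e. encourages
    the rows of W to be mutually orthogonal unit vectors.
    """
    gram = W @ W.T
    eye = np.eye(W.shape[0])
    return np.sum((gram - eye) ** 2)  # squared Frobenius norm

# A matrix with orthonormal rows incurs (numerically) zero penalty.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(8, 4)))  # Q: (8, 4), orthonormal columns
W_ortho = Q.T  # 4 orthonormal rows of length 8
print(orthogonality_penalty(W_ortho))  # effectively 0
```

In training, such a penalty would be added to the task loss with a weighting coefficient, nudging the learned features toward decorrelation.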
“…Although a previous study [13] showed that the hyper-parameters and CNN architecture affected the overall accuracy, the GoogLeNet architecture is suitable for classification in many fields owing to its high accuracy; thus, we used fixed parameters. Second, the training accuracy and loss were not shown in this study because the ability to generalize was most important for the intended application [18]; thus, we focused only on the overall accuracy. However, overfitting was unlikely to cause problems during training in this study because GoogLeNet adopts the inception module [19] and global average pooling [20] to prevent overfitting.…”
Section: Automatic Training
confidence: 99%
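Global average pooling, which the excerpt above credits GoogLeNet with using to curb overfitting, replaces large fully connected layers by collapsing each feature map to a single average value, drastically cutting parameter count. A minimal sketch, with shapes assumed for illustration:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each feature map to its mean.

    feature_maps: array of shape (channels, height, width)
    returns: array of shape (channels,)
    """
    return feature_maps.mean(axis=(1, 2))

# Two 3x3 feature maps: channel 0 holds 0..8, channel 1 holds 9..17.
fmaps = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
print(global_average_pool(fmaps))  # [ 4. 13.]
```

Because the pooled vector has one entry per channel regardless of spatial size, no spatially sized FC weights need to be learned at that stage, which is the overfitting-reduction argument made in the excerpt.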