2020
DOI: 10.1038/s41598-020-57866-2

Structural Analysis and Optimization of Convolutional Neural Networks with a Small Sample Size

Abstract: Deep neural networks have gained immense popularity for Big Data problems; however, the number of available training samples can be relatively limited in specific application domains, particularly medical imaging, consequently leading to overfitting. This "Small Data" challenge may call for a mindset that is entirely different from the existing Big Data paradigm. Here, under small data scenarios, we examined whether the network structure has a substantial influence on the performance and whether the…

Cited by 94 publications (67 citation statements)
References 6 publications
“…In machine learning, the pre-training of models is conventionally carried out on large datasets, and large training sets ensure outstanding performance; however, this is far from the reality in the medical imaging domain, making the approach unsuitable there [56]. With small training samples, domain-specific models trained from scratch can work better [57][58][59][60] than transfer learning from a neural network model pre-trained on large training samples in another domain, such as the natural image database ImageNet. One reason is that the mapping from the unprocessed image to the feature vectors used for a particular task, such as classification in the medical case, is complex in the pre-trained case and requires a large training sample for good generalization [58][59][60].…”
Section: Transfer Learning Approaches (mentioning)
confidence: 99%
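The contrast drawn in this statement is easy to make concrete. Below is a minimal sketch (assuming PyTorch and torchvision, which the cited works do not specify) of the two strategies: fine-tuning an ImageNet-pretrained backbone versus training a compact domain-specific CNN from scratch. The backbone choice, class count, and layer sizes are illustrative assumptions only.

```python
# Minimal sketch of the two small-data strategies discussed above.
# Assumptions (not from the cited paper): PyTorch/torchvision, a ResNet-18
# backbone, and a hypothetical 2-class medical-imaging task.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical binary medical-imaging task

# Option 1: transfer learning from ImageNet weights (large-sample pre-training
# in another domain), replacing only the classification head.
pretrained = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
pretrained.fc = nn.Linear(pretrained.fc.in_features, NUM_CLASSES)

# Option 2: a compact CNN trained from scratch, which the cited studies report
# can generalize better when only a small domain-specific sample is available.
scratch = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_CLASSES),
)
```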
“…For instance, pre-trained networks are used, but under the assumption of substantial similarity between the pre-training and target sets [25][26][27]. Otherwise, some ambiguity remains about how foolproof the pre-trained network methodology is [28]. For MI tasks, very few datasets are accessible, and they differ somewhat in how the paradigm is implemented.…”
Section: Introduction (mentioning)
confidence: 99%
“…The agenda of the present paper is as follows: Section 2 describes the collection of MI data used for validation. As in [32], the spectral range is split into the following bandwidths of interest: ∆f ∈ {µ ∈ [8, 12], β_low ∈ [16, 20], β_med ∈ [20, 24], β_high ∈ [24, 28]} Hz.…”
Section: Introduction (mentioning)
confidence: 99%
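The band split quoted above can be illustrated directly. The sketch below (assuming SciPy and NumPy; the sampling rate, Butterworth filter, and filter order are assumptions, not values from the cited paper) decomposes one EEG channel into the four stated bands.

```python
# Minimal sketch: split a single EEG channel into the quoted sub-bands
# (mu [8, 12] Hz; beta_low [16, 20] Hz; beta_med [20, 24] Hz; beta_high [24, 28] Hz).
# Sampling rate and filter settings are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed sampling rate in Hz

BANDS = {
    "mu": (8.0, 12.0),
    "beta_low": (16.0, 20.0),
    "beta_med": (20.0, 24.0),
    "beta_high": (24.0, 28.0),
}

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter for one channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Usage: decompose a simulated 4-second recording into the four bands.
t = np.arange(0, 4.0, 1.0 / FS)
eeg = np.random.randn(t.size)  # stand-in for a real motor-imagery signal
sub_bands = {name: bandpass(eeg, lo, hi) for name, (lo, hi) in BANDS.items()}
```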