2020
DOI: 10.48550/arxiv.2008.03522
Preprint

Unravelling Small Sample Size Problems in the Deep Learning World

Abstract: The growth and success of deep learning approaches can be attributed to two major factors: the availability of hardware resources and the availability of a large number of training samples. For problems with large training databases, deep learning models have achieved superlative performance. However, there are many small sample size, or S³, problems for which it is not feasible to collect large training databases. It has been observed that deep learning models do not generalize well on S³ problems, and specialized …

Cited by 1 publication (1 citation statement)
References 77 publications (99 reference statements)
“…Finally, we offered a set of practical recommendations about how RS scientists can better implement DL techniques to take full advantage of a small dataset. As one previous paper noted (Keshari et al., 2020), a variety of approaches can be used to solve the small-data problem, such as data augmentation, data fine-tuning, adaptation of pre-trained models, and reducing the dependence on large-sample learning. However, in our review, we also presented even more techniques that are worth considering when working with a small dataset.…”
Section: Discussion
Confidence: 99%
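Of the strategies the citing statement lists, data augmentation is the most direct way to stretch a small training set. A minimal sketch of the idea, not taken from the paper itself: apply label-preserving transforms (flip, rotation, noise) to each image so a handful of samples yields several variants. The image sizes, transform choices, and noise level here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, rng):
    """Return simple label-preserving variants of one training image."""
    return [
        image,                                   # original
        image[:, ::-1],                          # horizontal flip
        np.rot90(image),                         # 90-degree rotation
        np.clip(image + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0),  # noise
    ]

# A "small sample size" set: 10 toy 8x8 grayscale images.
tiny_set = [rng.random((8, 8)) for _ in range(10)]
augmented = [v for img in tiny_set for v in augment(img, rng)]
print(len(tiny_set), len(augmented))  # 10 40
```

Each original contributes four training examples, quadrupling the effective set without new data collection; in practice the transforms must be chosen so they cannot change an image's label.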