Greedy auto-augmentation for n-shot learning using deep neural networks
2021
DOI: 10.1016/j.neunet.2020.11.015

Cited by 15 publications (17 citation statements: 0 supporting, 17 mentioning, 0 contrasting).
References 18 publications.
“…It is an open issue to perform a detailed study of how much the choice of different backgrounds (in a given set) affects the classifier performance. We also observe that the approach of [25, 26, 27, 28] to automatically derive augmentation techniques from data seems suitable to apply in the framework considered here.…”
Section: Related Work (mentioning)
confidence: 77%
“…Given a set of operators, finding an effective data-augmentation technique based on suitable compositions of such operators may greatly improve the overall classifier predictions. Although in many cases the proposed augmentation techniques are dataset-dependent, several techniques have also been proposed in the literature to learn an effective augmentation technique from the dataset itself, by searching a space of possible augmentation procedures [25, 26, 27, 28]. In this paper, however, we concentrate on the case of data augmentation with respect to a different characteristic, i.e., image background.…”
Section: Related Work (mentioning)
confidence: 99%
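The excerpt above refers to searching a space of possible augmentation procedures, which is the idea behind the cited paper's greedy auto-augmentation. Below is a minimal Python sketch of such a greedy policy search; the operator names, the `evaluate` scoring hook, and the depth limit are illustrative assumptions, not the cited method's actual components.

```python
# Hypothetical pool of augmentation operators; real implementations would
# use image transforms such as rotation, flipping, or color jitter.
OPERATORS = ["rotate", "flip", "crop", "color_jitter", "noise"]

def greedy_augmentation_search(evaluate, max_len=3):
    """Greedily grow a composition of operators, keeping an extension only
    if it improves the validation score returned by `evaluate(policy)`."""
    policy = []
    best = evaluate(policy)  # score of training with no augmentation
    for _ in range(max_len):
        scored = [(evaluate(policy + [op]), op)
                  for op in OPERATORS if op not in policy]
        score, op = max(scored)
        if score <= best:
            break  # no remaining operator improves the policy
        best, policy = score, policy + [op]
    return policy, best

# Usage: `evaluate` would train a classifier on data augmented with the
# candidate policy and return its validation accuracy, e.g.:
#   policy, acc = greedy_augmentation_search(my_evaluate)
```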
“…Step 2: Stochastic Gradient Descent for Meta-Model Training. Initialize: randomly initialize the parameters of [23].…”
Section: Statement (mentioning)
confidence: 99%
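The quoted step is a fragment of the citing paper's training algorithm. Below is a minimal PyTorch sketch of what such a step typically looks like, i.e., random parameter initialization followed by stochastic gradient descent updates; the architecture, layer sizes, and learning rate are assumptions for illustration, not the citing paper's setup.

```python
import torch
import torch.nn as nn

# Illustrative meta-model; the layer sizes are assumptions.
meta_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))

# "Initialize: randomly initialize the parameters" -- draw fresh weights.
for p in meta_model.parameters():
    nn.init.normal_(p, mean=0.0, std=0.02)

optimizer = torch.optim.SGD(meta_model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def sgd_step(x, y):
    """One stochastic gradient descent update on a mini-batch (x, y)."""
    optimizer.zero_grad()
    loss = loss_fn(meta_model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```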
“…While big data would allow for training, data scientists may apply newer techniques with fewer data points to mine and transfer them [48], despite training on limited labeled information in the data [49, 50]. Models for ML can be trained with small datasets using few-shot and n-shot approaches [51, 52]. Few-shot learning has the potential to help clean and label datasets, as well as generate more data.…”
Section: Data Volume (mentioning)
confidence: 99%
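Since the excerpt mentions few-shot and n-shot approaches, here is a minimal sketch of the episode sampling such approaches are commonly built on (n-way, k-shot classification with a query set); the dataset format and episode sizes are assumptions, not anything specified by the citing paper.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=5):
    """Sample one n-way, k-shot episode from a labeled dataset.

    `dataset` is assumed to be a list of (example, label) pairs with at
    least k_shot + q_queries examples per sampled class.
    """
    by_label = defaultdict(list)
    for x, y in dataset:
        by_label[y].append(x)
    classes = random.sample(list(by_label), n_way)  # pick n classes
    support, query = [], []
    for c in classes:
        picks = random.sample(by_label[c], k_shot + q_queries)
        support += [(x, c) for x in picks[:k_shot]]   # k labeled "shots"
        query += [(x, c) for x in picks[k_shot:]]     # held-out queries
    return support, query
```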