2021
DOI: 10.1016/j.neunet.2021.02.007

Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning

Cited by 34 publications (8 citation statements)
References 28 publications
“…Unfortunately, these algorithms traditionally rely on large datasets. Recent advances in few-shot learning and data augmentation methods allow for deep learning approaches using small datasets—an excellent fit for the relative paucity of validated ototoxins in our training dataset ( 163 – 165 ). Another option is to use transcriptomic data for model fitting, either in lieu of or in addition to molecular fingerprints.…”
Section: Models For Ototoxicity Studies
confidence: 99%
“…NeuNet, since neural networks have the advantage of self-learning [13]. This work focused on how to set the optimization goals in NeuNet.…”
Section: Dynamic Aggregation
confidence: 99%
“…We adapted several augmentation techniques: CutMix [70], where image patches are cut and pasted among training images and the ground-truth labels are mixed proportionally to the area of the patches; Mixup [50], which generates convex combinations of pairs of examples and their labels and proved effective for support and query augmentation strategies; and Self-Mix [71], in which patches of an image are substituted into other regions of the same image.…”
Section: Data Augmentation Strategy
confidence: 99%
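For concreteness, below is a minimal NumPy sketch of the Mixup and CutMix operations described in the statement above (Self-Mix follows the same pattern but copies a patch within a single image). This is an illustrative implementation, not the cited papers' code; the function names, tensor shapes, and the alpha parameter are assumptions, and labels are assumed to be one-hot vectors.

    import numpy as np

    def mixup(x1, y1, x2, y2, alpha=0.2):
        # Mixup: convex combination of two examples and their one-hot labels.
        lam = np.random.beta(alpha, alpha)
        return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

    def cutmix(img1, lab1, img2, lab2, alpha=1.0):
        # CutMix: paste a random patch of img2 into img1 and mix the one-hot
        # labels in proportion to the pasted area.
        h, w = img1.shape[:2]
        lam = np.random.beta(alpha, alpha)
        ph, pw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))  # patch size
        cy, cx = np.random.randint(h), np.random.randint(w)           # patch centre
        top, bot = max(cy - ph // 2, 0), min(cy + ph // 2, h)
        left, right = max(cx - pw // 2, 0), min(cx + pw // 2, w)
        mixed = img1.copy()
        mixed[top:bot, left:right] = img2[top:bot, left:right]
        lam = 1 - ((bot - top) * (right - left)) / (h * w)            # exact area ratio
        return mixed, lam * lab1 + (1 - lam) * lab2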
“…We investigated three test cases that evaluate training performance when data is sampled from the support, query, and task data, respectively. Our approach is similar to the techniques adopted in [41,30,50], which examine the impact of augmentation on a diverse set of data combinations.…”
Section: Augmentation Performance
confidence: 99%
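As a rough illustration of those three test cases, the sketch below applies an augmentation callable (such as mixup or cutmix above) to the support set, the query set, or both sides of a task episode. All names here (augment_episode, aug_set, mode, the episode arrays) are hypothetical; the cited works' actual pipelines may differ.

    import numpy as np

    def augment_episode(support_x, support_y, query_x, query_y, augment, mode="task"):
        # Apply `augment` over random pairs within a set (labels must be one-hot).
        def aug_set(xs, ys):
            partners = np.random.permutation(len(xs))
            pairs = [augment(xs[i], ys[i], xs[j], ys[j]) for i, j in enumerate(partners)]
            xs_a, ys_a = (np.stack(t) for t in zip(*pairs))
            return xs_a, ys_a

        if mode in ("support", "task"):   # "task" augments both sides of the episode
            support_x, support_y = aug_set(support_x, support_y)
        if mode in ("query", "task"):
            query_x, query_y = aug_set(query_x, query_y)
        return support_x, support_y, query_x, query_y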