2019
DOI: 10.48550/arxiv.1910.00762
Preprint

Accelerating Deep Learning by Focusing on the Biggest Losers

Cited by 10 publications (20 citation statements). References 0 publications.

“…With 1% labeled data on the CIFAR-10 dataset, the proposed framework achieves 28.36% higher accuracy than using the 1% labeled data for direct supervised learning. The proposed contrast-scoring-based data selection achieves 13.9% higher accuracy than the SOTA data selection approach [13]. Meanwhile, the proposed approach achieves 2.67x faster learning than the baseline when the same accuracy is achieved.…”
Section: Introduction
confidence: 88%
“…The next two baselines are SOTA approaches for selecting data to improve training efficiency and accuracy. Selective-Backprop [13] selects the data with the largest losses for training. K-Center is a SOTA active learning approach [25], which selects the most representative data by performing k-center clustering in the feature space.…”
Section: A. Experimental Setup
confidence: 99%
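
As a concrete illustration of the selection rule quoted above, the sketch below keeps only each batch's highest-loss examples before backpropagation. This is a simplified hard top-k variant written against PyTorch, not the paper's exact procedure (Selective-Backprop samples examples with probability increasing in their loss percentile); the helper name and the `keep_frac` parameter are hypothetical.

```python
import torch
import torch.nn.functional as F

def selective_backprop_step(model, optimizer, inputs, targets, keep_frac=0.5):
    """One step of loss-based example selection (hypothetical helper).

    A cheap scoring pass ranks examples by loss; only the top
    `keep_frac` fraction participates in the backward pass. This is a
    hard top-k simplification of Selective-Backprop, which instead
    samples examples with probability increasing in their loss percentile.
    """
    model.train()
    # Scoring pass: per-example losses, no autograd graph needed.
    with torch.no_grad():
        per_example = F.cross_entropy(model(inputs), targets, reduction="none")

    # Keep the "biggest losers": the examples with the largest losses.
    k = max(1, int(keep_frac * per_example.numel()))
    idx = torch.topk(per_example, k).indices

    # Train only on the selected high-loss subset, so the expensive
    # backward pass touches only a fraction of the batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs[idx]), targets[idx])
    loss.backward()
    optimizer.step()
    return loss.item()
```

The trade-off this sketch makes explicit: it spends an extra gradient-free forward pass on the full batch in exchange for running the more expensive backward pass on only the selected subset.
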
“…A body of recent work has emphasized making more computationally efficient models (Wu et al., 2019; Coleman et al., 2019; Jiang et al., 2019), while another line of work has focused on the opposite: building larger models with more parameters to tackle more complex tasks (Amodei and Hernandez, 2018; Sutton, 2019). We suggest leaderboards which utilize carbon emissions and energy metrics to promote an informed balance of performance and efficiency.…”
Section: Energy Efficiency Leaderboards
confidence: 99%
“…Alternatively, one can re-weight examples according to their loss when using a stochastic optimizer, which puts more emphasis on "hard" examples [24, 29, 57]. Re-weighting can also be enforced implicitly via a regularization parameter [1], loss clipping [69], or modelling of crowd-worker qualities [30], which can make the objective more robust to rare instances.…”
Section: Related Work
confidence: 99%
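
A minimal sketch of the loss re-weighting and clipping ideas mentioned in the statement above, assuming per-example weights come from some external hardness or quality heuristic; the helper name, the source of `weights`, and the `clip_at` parameter are all hypothetical, not an API from any cited work.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, targets, weights=None, clip_at=None):
    """Per-example cross-entropy with optional re-weighting and clipping.

    `weights` might encode example hardness or estimated annotator
    quality; `clip_at` caps each example's loss so rare or noisy
    instances cannot dominate the objective.
    """
    per_example = F.cross_entropy(logits, targets, reduction="none")
    if clip_at is not None:
        # Loss clipping: bound the contribution of any single example.
        per_example = torch.clamp(per_example, max=clip_at)
    if weights is not None:
        # Emphasize "hard" examples (or de-emphasize suspected outliers).
        per_example = per_example * weights
    return per_example.mean()
```
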