2021
DOI: 10.48550/arxiv.2106.14568
Preprint

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity


Cited by 5 publications (8 citation statements)
References 27 publications
“…In Fig. 5, our method achieves over 1% higher performance in its optimal case (q = 0.5) than the best-performing case of MIMO (q = 0.0), specifically when using WRN28-. We also measured changes in diversity among individual predictions when varying q, since this diversity is one of the key factors associated with ensemble performance [31], [8], [10], [32].…”
Section: B: Experimental Results
Citation type: mentioning; confidence: 99%
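
As a side note on the diversity measure this citing work refers to: below is a minimal, hypothetical sketch (illustrative names and shapes, not code from any of the cited papers) of one common choice, the average pairwise disagreement rate between ensemble members' hard predictions, assuming each member outputs softmax probabilities:

    import numpy as np

    def pairwise_disagreement(member_probs):
        """member_probs: shape (M, N, C), softmax outputs of M members on N samples."""
        labels = member_probs.argmax(axis=-1)  # (M, N) hard predictions per member
        m = labels.shape[0]
        total, pairs = 0.0, 0
        for i in range(m):
            for j in range(i + 1, m):
                total += np.mean(labels[i] != labels[j])  # fraction of samples where i and j differ
                pairs += 1
        return total / pairs  # 0 = identical members, higher = more diverse

    # toy usage: 3 members, 5 samples, 10 classes
    probs = np.random.dirichlet(np.ones(10), size=(3, 5))
    print(pairwise_disagreement(probs))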
“…BatchEnsemble can achieve performance similar to the traditional deep ensemble method with only a small number of additional parameters from the parameterized vectors. Pruning-based approaches, which decrease the number of floating-point operations (FLOPs) relative to the traditional deep ensemble method, have also been developed [7], [8].…”
Section: B Implicit Deep Ensembles
Citation type: mentioning; confidence: 99%
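
To make the BatchEnsemble mechanism referenced above concrete: the sketch below is a hedged illustration (made-up variable names, not the authors' implementation) of the core idea from BatchEnsemble (Wen et al., 2020), where every member shares one weight matrix W and differs only by a rank-1 modulation r_m s_m^T, so each extra member costs two vectors rather than a full copy of W:

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out, members = 8, 4, 3

    W = rng.standard_normal((d_in, d_out))    # shared "slow" weights, stored once
    R = rng.standard_normal((members, d_in))  # per-member input vectors r_m
    S = rng.standard_normal((members, d_out)) # per-member output vectors s_m

    def member_forward(x, m):
        # Equivalent to x @ (W * np.outer(R[m], S[m])), i.e. a rank-1
        # elementwise modulation of the shared weights, computed cheaply.
        return ((x * R[m]) @ W) * S[m]

    x = rng.standard_normal((5, d_in))  # batch of 5 inputs
    ensemble_mean = np.mean([member_forward(x, m) for m in range(members)], axis=0)
    print(ensemble_mean.shape)  # (5, d_out)

Each member adds d_in + d_out parameters instead of d_in * d_out, which is the "small number of additional parameters" the quote refers to.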
“…Finally, we would like to note that there are increasing research efforts to develop UQ methods for ML/AI models, which would play important roles in enabling the Bayesian UQ paradigm for uncertainty-aware quantification of reproducibility. While a detailed presentation of such methods would be outside the scope of this article, we refer interested readers to relevant papers on the uncertainty quantification of ML/AI models, [64][65][66][72][73][74] as well as the references therein.…”
Section: Potential Applications Of Uncertainty-aware Reproducibility M...
Citation type: mentioning; confidence: 99%
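
For readers who want a concrete starting point, here is a minimal, generic sketch (an illustrative assumption, not any cited paper's method) of ensemble-based uncertainty quantification: average the members' softmax outputs and report the predictive entropy as an uncertainty score:

    import numpy as np

    def predictive_entropy(member_probs):
        """member_probs: (M, N, C) softmax outputs -> (N,) entropy in nats."""
        mean_probs = member_probs.mean(axis=0)  # ensemble predictive distribution
        return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)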
“…Although IMP methods [84,2,18,5,46] […] [24,82,51]. Based on the IMP technique, researchers found the existence of LTH in various applications, including visual recognition tasks [24], natural language processing [4,7,53], reinforcement learning [69,81], generative models [31], low-cost neural network ensembling [46], and improving robustness [8]. Although LTH has been actively explored in the ANN domain, LTH for SNNs has rarely been studied.…”
Section: Lottery Ticket Hypothesis
Citation type: mentioning; confidence: 99%
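
Since the quote leans on iterative magnitude pruning (IMP), a hedged sketch of the generic procedure follows (the `train` callback and dict-of-arrays weights are placeholders, not the cited papers' code): train, prune the smallest-magnitude surviving weights, rewind the survivors to their initial values, and repeat:

    import numpy as np

    def imp(weights_init, train, rounds=3, prune_frac=0.2):
        """weights_init: dict name -> ndarray; train: callback (weights, mask) -> trained weights."""
        mask = {k: np.ones_like(w) for k, w in weights_init.items()}
        for _ in range(rounds):
            trained = train({k: w * mask[k] for k, w in weights_init.items()}, mask)
            for k, w in trained.items():
                alive = np.abs(w[mask[k] == 1])
                if alive.size == 0:
                    continue
                thresh = np.quantile(alive, prune_frac)    # cut the lowest-magnitude fraction
                mask[k] = mask[k] * (np.abs(w) >= thresh)  # drop pruned connections for good
        # the "winning ticket": the original initialization restricted to the mask
        return {k: weights_init[k] * mask[k] for k in weights_init}, mask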