Neural Ensemble Search for Uncertainty Estimation and Dataset Shift
Preprint, 2020
DOI: 10.48550/arxiv.2006.08573

Cited by 3 publications (8 citation statements) | References 0 publications
“…Ensembling techniques constructed using diverse individual architectures have better generalization ability [30,33,46,47]. To our knowledge, this is the first study to leverage the different robustness properties of specific V1 neuronal populations to create diverse members of an ensemble.…”
Section: Discussion
confidence: 98%
“…The Variants Ensemble not only outperformed all the variants but also performed on par with ResNet18 on clean images (Fig. …). While diversity in the members of an ensemble has been found to be important to its generalization ability [30,33,46,47], ensembles of networks with the same architecture that only differ in their … We also compared Variants Ensemble to a popular defense method that uses Gaussian Noise Training (GNT) as data augmentation [11]. We trained both ResNet18 (ResNet18-GNT) and standard VOneResNet18 (VOneResNet18-GNT) with GNT, observing an increased robustness for noise and blur categories (Fig. …).…”
Section: Ensemble of Different VOneNet Variants Eliminates Robustness…
confidence: 98%
“…Neural Ensemble Search [63] [11] is a similar approach with the key idea of gradual removal of the least promising operations from the supernetwork. Neural Ensemble Search via Sampling (NESS) [47] is supernetbased but does not require the ensemble models to have the same first layers.…”
Section: Search for Architectures of Neural Ensembles
confidence: 99%
“…It has been shown to work well if the models' mistakes are independent [21], which is helped by the models being different from each other [57]. NAS has been used [11,63] to produce models that together make a good ensemble. Modern supernetwork-based approaches seem very fitting to this purpose because they do not incur additional training costs for ensembles of arbitrarily large size (once a supernetwork is trained, weights for trained subnetworks can be extracted from it and used without additional retraining 1 ).…”
Section: Introduction
confidence: 99%
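The averaging that this citation statement describes — combining models whose mistakes are independent — can be sketched minimally. The toy softmax outputs below are hypothetical, not from the cited paper; they only illustrate that averaging the members' probability vectors yields the ensemble prediction:

```python
import numpy as np

# Toy setting: 3 ensemble members, 4 classes, one input.
# Each row is one member's softmax output (hypothetical values).
member_probs = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],  # this member makes a different mistake
])

# Ensemble prediction: average the members' probability vectors,
# then take the argmax of the averaged distribution.
ensemble_probs = member_probs.mean(axis=0)
prediction = int(np.argmax(ensemble_probs))
```

Because the third member's error is not shared by the other two, the averaged distribution still places most mass on the class the majority agrees on.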
“…Although Wenzel et al (2021) attempt to relax this issue by allowing more diversity in the ensemble, they vary just two hyperparameters. Similarly, Zaidi et al (2020) varied the architecture with fixed trainable hyperparameters to increase the ensemble diversity. By constructing diverse DNNs models through a methodical and automated approach, we hypothesize that the assumption of, and the eventual collapse, to one hypothesis can be avoided, thus providing robust and efficient estimates of uncertainty.…”
Section: Introduction
confidence: 99%
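The uncertainty-estimation use of diverse ensembles that this last statement refers to can be illustrated with a small sketch (toy numbers, not the authors' implementation): a common choice is to score uncertainty by the entropy of the averaged predictive distribution, which rises when the members disagree — as they tend to under dataset shift.

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the ensemble's averaged predictive distribution."""
    mean_probs = member_probs.mean(axis=0)
    return float(-(mean_probs * np.log(mean_probs + 1e-12)).sum())

# Members agree -> low uncertainty (hypothetical softmax outputs).
agree = np.array([[0.90, 0.05, 0.05],
                  [0.85, 0.10, 0.05]])

# Members disagree (e.g. under dataset shift) -> higher uncertainty.
disagree = np.array([[0.90, 0.05, 0.05],
                     [0.05, 0.90, 0.05]])
```

With these inputs, `predictive_entropy(disagree)` exceeds `predictive_entropy(agree)`, which is the behavior ensemble-based uncertainty estimates rely on.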