2020
DOI: 10.48550/arxiv.2001.10422
Preprint

NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search

Abstract: One-shot neural architecture search (NAS) has played a crucial role in making NAS methods computationally feasible in practice. Nevertheless, there is still a lack of understanding of how exactly these weight-sharing algorithms work, owing to the many factors controlling the dynamics of the process. In order to allow a scientific study of these components, we introduce a general framework for one-shot NAS that can be instantiated to many recently introduced variants, and introduce a general benchmarking framework …
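
The cheap-evaluation idea behind such a benchmark can be pictured with a small, purely illustrative Python sketch (the lookup table and anytime_curve helper below are hypothetical stand-ins, not the paper's actual API): because every architecture in the covered space has pre-computed metrics, the trajectory of a one-shot search can be scored against ground truth by table lookup instead of retraining.

    # Purely illustrative: `lookup` stands in for a tabular NAS benchmark
    # mapping an architecture encoding to its pre-computed test accuracy.
    lookup = {
        ('conv1x1', 'conv3x3'): 0.931,
        ('conv3x3', 'maxpool3x3'): 0.942,
    }

    def anytime_curve(search_trajectory):
        """Score the architecture a one-shot search favors after each
        epoch by a table lookup instead of training it from scratch."""
        return [lookup[arch] for arch in search_trajectory]

    # e.g. the search favored one architecture after epoch 1, another after epoch 2:
    print(anytime_curve([('conv1x1', 'conv3x3'), ('conv3x3', 'maxpool3x3')]))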

Cited by 11 publications (14 citation statements)
References 3 publications

“…The existing NAS methods can be summarized in three categories: 1) reinforcement-learning-based methods [21], [22]; 2) evolutionary-algorithm-based methods [23], [24]; and 3) gradient-based methods [25], [26], [27], [28], [29]. Recently, several NAS benchmarks for the generic object-classification task, such as NAS-Bench-101 [30], NAS-Bench-201 [31], and NAS-Bench-1Shot1 [32], as well as an evaluation protocol [33], have been proposed for fair performance comparison. On the other hand, in order to quickly adapt to unseen scenarios and discover excellent architectures there, some meta-NAS methods [34], [35], [36], [37], [38] have been developed.…”
Section: Related Work
confidence: 99%
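
For context on how such tabular benchmarks support fair comparison, here is a minimal query sketch against the public NAS-Bench-101 API (assuming the nasbench package and the released nasbench_only108.tfrecord file are available locally): a cell is encoded as an upper-triangular adjacency matrix plus one operation label per node, and querying it returns pre-computed metrics with no training.

    from nasbench import api

    # Assumes the released dataset file has been downloaded locally.
    nasbench = api.NASBench('nasbench_only108.tfrecord')

    # A cell is a DAG: upper-triangular adjacency matrix, one op per node.
    cell = api.ModelSpec(
        matrix=[[0, 1, 1, 0, 0, 0, 1],   # input
                [0, 0, 0, 1, 0, 0, 0],   # conv1x1
                [0, 0, 0, 0, 1, 0, 0],   # conv3x3
                [0, 0, 0, 0, 0, 1, 0],   # conv3x3
                [0, 0, 0, 0, 0, 0, 1],   # maxpool3x3
                [0, 0, 0, 0, 0, 0, 1],   # conv3x3
                [0, 0, 0, 0, 0, 0, 0]],  # output
        ops=['input', 'conv1x1-bn-relu', 'conv3x3-bn-relu', 'conv3x3-bn-relu',
             'maxpool3x3', 'conv3x3-bn-relu', 'output'])

    # Pre-computed metrics come back instantly instead of being trained.
    metrics = nasbench.query(cell)
    print(metrics['validation_accuracy'], metrics['test_accuracy'])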
“…In this series of experiments, we evaluate DEHB on a broad range of NAS benchmarks. We use a total of 13 tabular benchmarks from NAS-Bench-101 [Ying et al., 2019], NAS-Bench-1shot1 [Zela et al., 2020], NAS-Bench-201 [Dong and Yang, 2020] and NAS-HPO-Bench [Klein and Hutter, 2019]. For NAS-Bench-101, we show results on CifarC (a mixed-data-type encoding of the parameter space [Awad et al., 2020]) in Figure 10; BOHB and DEHB initially perform similarly to RS on this dataset, since there is little correlation between runs with few epochs (low budgets) and runs with many epochs (high budgets) in NAS-Bench-101.…”
Section: NAS Benchmarks
confidence: 99%
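
The weak low-budget/high-budget correlation mentioned in the quote can be checked directly from the tabular data. A minimal sketch, assuming the full NAS-Bench-101 file (nasbench_full.tfrecord, which records budgets of 4, 12, 36, and 108 epochs) plus scipy are available:

    import random
    from nasbench import api
    from scipy.stats import spearmanr

    # Needs the full dataset (all epoch budgets), not the 108-epoch-only file.
    nasbench = api.NASBench('nasbench_full.tfrecord')

    def mean_val_acc(runs):
        # Each budget holds one record per repeated training run; average them.
        return sum(r['final_validation_accuracy'] for r in runs) / len(runs)

    low, high = [], []
    for h in random.sample(list(nasbench.hash_iterator()), 500):
        _, computed = nasbench.get_metrics_from_hash(h)
        low.append(mean_val_acc(computed[12]))    # cheap 12-epoch budget
        high.append(mean_val_acc(computed[108]))  # full 108-epoch budget

    rho, _ = spearmanr(low, high)
    print(f'Spearman correlation, 12 vs. 108 epochs: {rho:.3f}')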
“…NAS-Bench-1shot1 was introduced by [Zela et al., 2020] as a benchmark derived from the large space of architectures offered by NAS-Bench-101. This benchmark allows the use of modern one-shot NAS methods with weight sharing ([Pham et al., 2018], [Liu et al., 2018]).…”
Section: NAS-Bench-1shot1
confidence: 99%
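
To make the weight-sharing mechanism concrete, here is a minimal DARTS-style mixed operation in PyTorch (a toy candidate-operation set, not the actual NAS-Bench-1shot1 choices): every candidate op participates in the forward pass, weighted by a softmax over learned architecture parameters, so all candidate architectures share one set of network weights.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        """One edge of a one-shot model: all candidate ops run in the
        forward pass, weighted by a softmax over architecture parameters."""
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.Conv2d(channels, channels, 1, bias=False),
                nn.MaxPool2d(3, stride=1, padding=1),
                nn.Identity(),
            ])
            # One architecture parameter per candidate op (alpha in DARTS).
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

        def forward(self, x):
            weights = F.softmax(self.alpha, dim=0)
            return sum(w * op(x) for w, op in zip(weights, self.ops))

    # After the search, each edge is discretized to its strongest candidate:
    # best_op = mixed.ops[mixed.alpha.argmax()]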
“…There are several important directions that we intend to explore in the future, including (i) simplifying the adjacency matrix P to capture the dependency and mutual interaction between synaptic connections, e.g., approximating gradients using local information (Jaderberg et al., 2017), (ii) extending the proposed framework to NAS benchmarks (Ying et al., 2019; Dong & Yang, 2020; Dong et al., 2021; Zela et al., 2020) to select the best subnetwork, and (iii) designing an efficient algorithm to directly optimize NN architectures based on β_eff.…”
Section: Conclusion and Discussion
confidence: 99%