2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00293
MnasNet: Platform-Aware Neural Architecture Search for Mobile

Abstract: Designing convolutional neural networks (CNNs) for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to designing and improving mobile CNNs on all dimensions, it is very difficult to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the …
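The abstract describes folding measured latency into the search objective. A minimal sketch of such a multi-objective reward is below; the weighted-product form, the target latency `target_ms`, and the exponent `w` follow the MnasNet paper's soft-constraint formulation, but the specific numeric defaults here should be treated as illustrative assumptions.

```python
def nas_reward(accuracy: float, latency_ms: float,
               target_ms: float = 75.0, w: float = -0.07) -> float:
    """Weighted-product reward: ACC(m) * (LAT(m) / T)^w.

    With w < 0, models slower than the target T are penalized and
    faster models are mildly rewarded, so the search trades a little
    accuracy for meeting the latency budget. (Defaults are assumed
    illustrative values, not a definitive configuration.)
    """
    return accuracy * (latency_ms / target_ms) ** w

# A model exactly at the target is rewarded by its accuracy alone:
on_target = nas_reward(0.75, 75.0)   # == 0.75
# A slightly less accurate but much faster model can score higher
# than a slightly more accurate but slow one:
fast = nas_reward(0.74, 60.0)
slow = nas_reward(0.75, 120.0)
```

Because the penalty is a smooth power law rather than a hard cutoff, the controller can still explore architectures just past the latency target instead of discarding them outright.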

Cited by 2,538 publications (1,908 citation statements)
References 27 publications
“…val accuracy). These methods were extended in many ways, such as progressive searching [13], parameter sharing [21], network transformation [3], resource-constrained searching [30], and differentiable searching like DARTS [15] and SNAS [34]. Evolutionary algorithms are an alternative to RL.…”
Section: Related Work
confidence: 99%
“…a is a weighted product to approximate the Pareto optimal problem [30] and a is a constant value. We have a = 0 if ζ ≤ o, implying that the complexity constraint is satisfied.…”
Section: Groupable Residual Network
confidence: 99%
“…Architecture Search Space: For CIFAR-10, we use convolutional architectures as the backbone. For every convolutional layer, we first determine the filter size in [24,36,48,64], the kernel size in [1,3,5,7], and the strides. Two sets of experiments are carried out to determine the strides: (1) by exploring the child networks with a fixed stride of 1; (2) by allowing the controller to predict the strides in [1,2].…”
Section: Experiments (Datasets)
confidence: 99%
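The excerpt above enumerates a per-layer search space: filter counts in [24, 36, 48, 64], kernel sizes in [1, 3, 5, 7], and strides either fixed at 1 or predicted in [1, 2]. A hypothetical sketch of sampling a child network from that space (function names and the network depth are assumptions for illustration):

```python
import random

# Per-layer choices taken from the cited excerpt.
FILTERS = [24, 36, 48, 64]
KERNELS = [1, 3, 5, 7]
STRIDES = [1, 2]

def sample_layer(rng: random.Random, fixed_stride: bool = False) -> dict:
    """Sample one conv-layer configuration.

    fixed_stride=True emulates the first experimental setting
    (stride pinned to 1); otherwise the stride is also searched.
    """
    return {
        "filters": rng.choice(FILTERS),
        "kernel": rng.choice(KERNELS),
        "stride": 1 if fixed_stride else rng.choice(STRIDES),
    }

def sample_child(rng: random.Random, depth: int = 6) -> list:
    """Sample a full child network as a list of layer configs.
    The depth of 6 is an illustrative assumption, not from the excerpt."""
    return [sample_layer(rng) for _ in range(depth)]

# Each layer has 4 * 4 * 2 = 32 possible configurations when
# strides are searched, so the space grows as 32**depth.
space_size = len(FILTERS) * len(KERNELS) * len(STRIDES)
```

In a real controller these choices would be predicted sequentially by an RNN rather than sampled uniformly; the uniform sampler above only illustrates the size and shape of the space.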
“…We concentrate on the architecture as described in table 2. We note that this only scratches the surface of possible architectures, and also potentially provides a useful target for NAS-style approaches [16,14] to find better isometric models. In our experiments, we use isometric networks consisting of MobileNetV3 + SE bottleneck blocks [5].…”
Section: Isometric Architectures
confidence: 99%