2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01166
ChamNet: Towards Efficient Network Design Through Platform-Aware Model Adaptation

Abstract: This paper proposes an efficient neural network (NN) architecture design methodology called Chameleon that honors given resource constraints. Instead of developing new building blocks or using computationally intensive reinforcement learning algorithms, our approach leverages existing efficient network building blocks and focuses on exploiting hardware traits and adapting computation resources to fit target latency and/or energy constraints. We formulate platform-aware NN architecture search in an optimization…
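As a rough illustration of the adaptation loop the abstract describes, the sketch below searches over hypothetical per-stage width multipliers and picks the configuration with the highest predicted accuracy under a latency budget. The functions `predict_accuracy` and `predict_latency_ms` are illustrative stand-ins, not the paper's actual predictors.

```python
# Hypothetical sketch of predictor-driven, platform-aware model adaptation.
# All constants and predictor formulas below are made up for illustration.
from itertools import product

WIDTH_CHOICES = [0.5, 0.75, 1.0, 1.25]  # hypothetical per-stage width multipliers
NUM_STAGES = 3
LATENCY_BUDGET_MS = 30.0

def predict_accuracy(widths):
    # Stand-in accuracy predictor: wider stages -> higher, saturating accuracy.
    return 70.0 + sum(5.0 * (w ** 0.5) for w in widths)

def predict_latency_ms(widths):
    # Stand-in latency model: per-stage cost grows quadratically with width,
    # mimicking a table of profiled operator latencies on the target device.
    return sum(8.0 * w * w for w in widths)

best = None
for widths in product(WIDTH_CHOICES, repeat=NUM_STAGES):
    if predict_latency_ms(widths) <= LATENCY_BUDGET_MS:
        acc = predict_accuracy(widths)
        if best is None or acc > best[0]:
            best = (acc, widths)

print("best config under budget:", best)
```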

Cited by 264 publications (212 citation statements). References 38 publications.
“…Determining the widths of feature maps in CNNs can be considered as a subset of NAS. Although various approaches have been proposed [9,23,8,45], shrink-and-expand [16,56] is a more suitable approach for object detectors because of its simplicity and scalability. MorphNet [16] shrinks and linearly expands networks.…”
Section: Neural Architecture Search (NAS)
confidence: 99%
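The shrink-and-expand idea quoted above can be sketched in a few lines: shrink by dropping channels whose batch-norm scale has been driven near zero by a sparsifying regularizer, then expand all surviving widths by a uniform factor to refill a resource budget. The threshold and toy cost model below are hypothetical, in the spirit of MorphNet rather than a faithful reimplementation.

```python
# Minimal shrink-and-expand illustration; gammas, threshold, and cost model
# are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical |gamma| magnitudes for three conv layers after sparsifying training.
gammas = [rng.random(64), rng.random(128), rng.random(256)]

THRESHOLD = 0.5
shrunk = [int((g > THRESHOLD).sum()) for g in gammas]  # channels that survive

def cost(widths):
    # Toy resource model: cost of a conv stack ~ sum of products of adjacent widths.
    return sum(a * b for a, b in zip(widths, widths[1:]))

BUDGET = cost([64, 128, 256])  # reuse the original network's cost as the target
# Linearly expand the shrunk widths; cost scales quadratically with a uniform factor.
scale = (BUDGET / cost(shrunk)) ** 0.5
expanded = [max(1, int(round(s * scale))) for s in shrunk]
print("shrunk:", shrunk, "expanded:", expanded)
```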
“…We only consider MACs and the number of parameters as metrics of model efficiency. We should consider other metrics like memory footprint [64], memory access cost [49], and real latency on target platforms [86,76,81,8].…”
Section: Limitations and Weakness
confidence: 99%
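For the two metrics this quote starts from, parameter and MAC counts of a standard 2-D convolution follow directly from its shape; the helper below is a small self-contained sketch of that arithmetic.

```python
# Counting parameters and multiply-accumulates (MACs) for a standard 2-D
# convolution, two of the efficiency metrics discussed above.
def conv2d_params_and_macs(c_in, c_out, k, h_out, w_out, bias=True):
    params = c_out * (c_in * k * k + (1 if bias else 0))
    macs = c_in * k * k * c_out * h_out * w_out  # one MAC per weight per output pixel
    return params, macs

# Example: 3x3 conv, 64 -> 128 channels, 56x56 output feature map.
print(conv2d_params_and_macs(64, 128, 3, 56, 56))
```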
“…Besides, automated compact architecture design also provides a promising solution [20], [21]. Dai et al. develop efficient performance predictors to speed up the search process for efficient NNs [22]. Compared to MobileNetV2 on the ImageNet dataset, the generated ChamNets achieve up to 8.5% absolute top-1 accuracy improvement while reducing inference latency substantially.…”
Section: Efficient Neural Network
confidence: 99%
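One common form such performance predictors take is an operator-level latency lookup table: a candidate network's latency is estimated as the sum of profiled per-operator latencies on the target device. The sketch below assumes that structure; the table entries are made-up numbers, not real device measurements.

```python
# Illustrative operator-level latency lookup table; entries are hypothetical.
LATENCY_LUT_MS = {
    ("conv3x3", 64, 128): 2.1,
    ("conv3x3", 128, 256): 4.3,
    ("dwconv3x3", 256): 0.9,
    ("fc", 256, 1000): 0.4,
}

def estimate_latency_ms(ops):
    # Sum the profiled latency of each operator in the candidate network.
    return sum(LATENCY_LUT_MS[op] for op in ops)

candidate = [("conv3x3", 64, 128), ("conv3x3", 128, 256),
             ("dwconv3x3", 256), ("fc", 256, 1000)]
print(f"estimated latency: {estimate_latency_ms(candidate):.1f} ms")
```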
“…Similarly, we profile the models on the same Pixel 1 device. For prior work that does not optimize for Pixel 1, we retrain and profile their model closest to the MnasNet baseline (e.g., the FBNet-B and ChamNet-B networks [15], [16], since the authors use these ConvNets to compare against the MnasNet model). Finally, we directly report the number of epochs reported per method, hence canceling out the effect of different hardware systems (GPU vs. TPU hours).…”
Section: State-of-the-Art Runtime-Constrained ImageNet Classification
confidence: 99%