2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00256

Searching for Accurate Binary Neural Architectures

Abstract: Binary neural networks have attracted tremendous attention due to their efficiency when deployed on mobile devices. Because of the weak expressive ability of binary weights and features, their accuracy is usually much lower than that of full-precision (i.e. 32-bit) models. Here we present a new framework for automatically searching for compact but accurate binary neural networks. In practice, the number of channels in each layer is encoded into the search space and optimized using an evolutionary algorithm. E…
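To make the search idea in the abstract concrete, below is a minimal sketch of an evolutionary search over per-layer channel counts. The candidate widths, the fitness proxy, and the mutation and crossover schemes are assumptions for illustration only, not the authors' actual implementation; in the paper the fitness would be the validation accuracy of a trained binary network, traded off against its computation cost.

import random

# Hypothetical search space: each gene is the channel count of one layer,
# drawn from a small set of candidate widths (an assumption for illustration).
CANDIDATE_WIDTHS = [16, 32, 48, 64, 96, 128]
NUM_LAYERS = 8

def random_architecture():
    """Encode an architecture as a list of per-layer channel counts."""
    return [random.choice(CANDIDATE_WIDTHS) for _ in range(NUM_LAYERS)]

def fitness(arch):
    """Placeholder objective: a stand-in for accuracy minus a cost penalty.
    The real method would train and evaluate a binary network here."""
    accuracy_proxy = sum(arch)                       # stand-in for accuracy
    cost_penalty = 1e-3 * sum(w * w for w in arch)   # stand-in for FLOPs
    return accuracy_proxy - cost_penalty

def mutate(arch, rate=0.2):
    """Randomly resample some genes (channel counts)."""
    return [random.choice(CANDIDATE_WIDTHS) if random.random() < rate else w
            for w in arch]

def crossover(a, b):
    """Uniform crossover between two parent architectures."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=20, generations=10, elite=4):
    """Keep the best architectures each generation and refill with children."""
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best per-layer channel counts:", best)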

Cited by 58 publications (32 citation statements); references 16 publications.
“…Very recent work on binary NAS done concurrently with our work includes [37] and [38]. As opposed to our work, [37] simply searches for the number of channels in each layer inside a ResNet. [38] uses a completely different search space and training strategy.…”
Section: Related Work
confidence: 99%
“…On ImageNet, using 1.55 × 10^8 FLOPs, we obtain a Top-1 accuracy of 66.1% vs 58.76% using 1.63 × 10^8 FLOPs in [38]. On CIFAR-10 we score a Top-1 accuracy of 96.1% vs 94.43% in [37]. While [37] achieves an accuracy of 69.65% on ImageNet, they use 6.6 × 10^8 FLOPs (4.2× more than our biggest model).…”
Section: Related Work
confidence: 99%
“…Shen et al. [37] encode the number of channels in each layer into the search space and optimize it using an evolutionary algorithm. This approach can identify binary neural architectures that obtain high accuracy with as little computation cost as possible.…”
Section: F. Neural Architecture Search
confidence: 99%
“…However, the accuracy of QNNs is still worse than that of the full-precision versions. The second approach improves the performance of QNNs by expanding the channel number of the convolutional layers [17,18]. Shen et al. [18] applied this idea to 1-bit QNNs by encoding the channel expansion rates of all convolutional layers and utilizing evolutionary algorithms (EAs) to search for the expansion policy. Experiments show that their method can build QNNs with an acceptable parameter increment and accuracy close to that of their full-precision counterparts.…”
Section: Introduction
confidence: 99%
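As a concrete reading of the "channel expansion rates" mentioned above, the gene for each layer can be a multiplier applied to that layer's full-precision width. The base widths and rate choices below are hypothetical values for illustration, not taken from the cited papers.

# Hypothetical base widths of a full-precision backbone and a searched
# per-layer expansion-rate vector (all values are assumptions for illustration).
BASE_WIDTHS = [64, 64, 128, 128, 256, 256, 512, 512]
EXPANSION_CHOICES = [0.5, 1.0, 1.5, 2.0]

def expanded_widths(rates):
    """Map a vector of expansion rates onto the backbone's base widths."""
    return [int(w * r) for w, r in zip(BASE_WIDTHS, rates)]

# Example policy: widen early binary layers to recover expressive power lost
# to binarization, and shrink late layers to keep the total cost in check.
rates = [2.0, 1.5, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5]
print(expanded_widths(rates))  # [128, 96, 128, 128, 256, 256, 256, 256]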