2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00066
Learning Channel-Wise Interactions for Binary Convolutional Neural Networks

Cited by 84 publications (15 citation statements). References 24 publications.
“…ImageNet accuracy comparison (Top-1 / Top-5, %): CI-BCNN (Wang et al., 2019; one-step training) 59.9 / 84.2; Binary MobileNet (Phan et al., 2020b) 60.9 / 82.6; MoBiNet (Phan et al., 2020a) 54.4 / 77.5; EL (Hu et al., 2022) 56.4 / -; MeliusNet29 (Bethge et al., 2020) 65.8 / -; ReActNet-A + Our filtering optimizer 69.7 / 88.9…”
Section: Methods, Training Strategy
confidence: 99%
“…Knowledge transfer learns the class-distribution output via the loss function. A BNN can therefore be trained under the supervision of a real-valued model, improving its learning capability and reaching accuracy close to that of the real-valued model, as in CI-BCNN [69]. CI-BCNN extracts channel-wise interactions as prior knowledge to reduce the inconsistency of signs in binary feature maps and to preserve the information of input samples during inference.…”
Section: Accuracy Optimization Approaches
confidence: 99%
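The citation statement above describes teacher-student supervision only in general terms; the exact loss used by the cited works is not shown. Below is a minimal PyTorch sketch of the standard soft-label distillation loss such supervision typically takes, where a binary student matches a real-valued teacher's class distribution. The `temperature` and `alpha` values are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Soft-label supervision of a binary student by a real-valued teacher.

    The student matches the teacher's softened class distribution (KL term)
    while still fitting the ground-truth labels (CE term). Hyperparameters
    are illustrative, not taken from the cited works.
    """
    # Softened distributions; the T^2 factor restores gradient magnitude.
    soft_kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard_ce = F.cross_entropy(student_logits, labels)
    return alpha * soft_kl + (1.0 - alpha) * hard_ce
```

In training, the teacher's logits would be computed under `torch.no_grad()` so that only the binary student receives gradients.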
“…Rather than optimizing the binarization directly at the convolution layer, LAB2 [4] considers the binarization loss directly and applies a proximal Newton algorithm to the binary weights. CI-BCNN [5] mines channel-wise interactions through reinforcement-learned graph models and an interacted bitcount, reducing sign inconsistency in binary feature maps while retaining the information of the input samples. LNS [6] proposes to predict the binary weights through learning with noisy supervision when training the binarization functions.…”
Section: Related Work
confidence: 99%
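CI-BCNN's reinforcement-learned graph model itself is not reproduced in the snippet; as a point of reference, here is a minimal PyTorch sketch of the primitive those sign corrections act on: sign binarization with a straight-through estimator feeding a binary (XNOR/bitcount-style) convolution. The names `BinarizeSTE` and `BinaryConv2d` are hypothetical, and the float convolution merely emulates what dedicated binary hardware would do with XNOR and popcount.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through estimator in the backward
    pass, with gradients clipped to the region |x| <= 1."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Note: torch.sign maps 0 to 0; real BNNs usually map 0 to +1.
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Convolution over {-1, +1} weights and activations, emulated in float
    arithmetic; on binary hardware this reduces to XNOR + popcount."""

    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)
        xb = BinarizeSTE.apply(x)
        return F.conv2d(xb, wb, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

Because both operands are restricted to ±1, a wrong sign in a feature map flips every product it participates in, which is the inconsistency the channel-wise interactions in CI-BCNN are designed to correct.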