2021
DOI: 10.48550/arxiv.2102.08946
Preprint

S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration

Abstract: Previous studies have predominantly targeted self-supervised learning on real-valued networks and achieved many promising results. However, on the more challenging binary neural networks (BNNs), this task has not yet been fully explored in the community. In this paper, we focus on this more difficult scenario: learning networks where both weights and activations are binary, without any human-annotated labels. We observe that the commonly used contrastive objective is not satisfying on BNNs for competi…
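
The setting described in the abstract, where both weights and activations are constrained to {-1, +1} while training uses no labels, can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch-style binarized layer that applies a sign-like function in the forward pass and a straight-through estimator for gradients; the names BinarizeSTE and BinaryLinear are illustrative assumptions, not the paper's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    # Forward: map each value to {-1, +1}.
    # Backward: straight-through estimator, passing gradients only where |x| <= 1.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

class BinaryLinear(nn.Module):
    # Linear layer whose weights and input activations are both binarized.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        binary_weight = BinarizeSTE.apply(self.weight)   # binary weights
        binary_input = BinarizeSTE.apply(x)              # binary activations
        return F.linear(binary_input, binary_weight)

In the self-supervised setting the paper studies, layers like this would be trained with an unsupervised objective (e.g., a contrastive or distillation-style loss) rather than a label-based cross-entropy loss.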

Cited by 2 publications (2 citation statements).
References 32 publications.
“…Another approach is the SqueezeNet [28], which is designed as a small network with 1 × 1 convolutional filters. Yet another approach is to use binary weights in neural networks [29][30][31][32][33][34][35][36][37][38][39][40][41][42]. Since the weights of the neurons are binary, they can be used to slim and accelerate networks in specialized hardware including compute-in-memory systems [43].…”
Section: Related Work
confidence: 99%
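
The acceleration argument in the citation statement above (binary weights enabling slim, fast networks on specialized hardware) rests on replacing multiply-accumulate operations with bitwise XNOR/XOR and popcount. A minimal illustrative sketch in plain Python follows; the helper names pack_bits and binary_dot are hypothetical and not code from any of the cited works.

def pack_bits(v):
    # Pack a {-1, +1} vector into an integer bitmask: +1 -> bit 1, -1 -> bit 0.
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    # Dot product of two {-1, +1} vectors from their packed bitmasks.
    # XOR counts mismatching bits (equivalently, XNOR counts matches),
    # so dot = (#matches) - (#mismatches) = n - 2 * popcount(a XOR b).
    mismatches = bin(a_bits ^ b_bits).count("1")
    return n - 2 * mismatches

# Example: agrees with the ordinary dot product on {-1, +1} vectors.
a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
assert binary_dot(pack_bits(a), pack_bits(b), 4) == sum(x * y for x, y in zip(a, b))

Because each binary weight occupies a single bit rather than a 32-bit float, storage shrinks by up to 32x, and the inner loop of a dot product reduces to a few bitwise instructions on hardware that supports them, which is the basis of the slimming and acceleration claims cited above.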
“…In [27] the neural network is slimmed by removing some layers from a well-developed model. Binary neural networks [28,29,30,31,32,33,34,35,36,37] showed that binary weights can be used to slim and accelerate neural networks.…”
Section: Introduction
confidence: 99%