2020
DOI: 10.1109/access.2020.3029576

Training Hardware for Binarized Convolutional Neural Network Based on CMOS Invertible Logic

Abstract: In this paper, we implement fast and power-efficient training hardware for convolutional neural networks (CNNs) based on CMOS invertible logic. The backpropagation algorithm is generally hard to implement in hardware because it requires high-precision floating-point arithmetic. Even though the parameters of CNNs can be represented in fixed-point or even binary form during inference, they are still represented in floating point during training. Our hardware uses low-precision data representation for both inference and tr…
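As a rough illustration of the low-precision inference path the abstract refers to, below is a minimal sketch of a binarized 2-D convolution in Python/NumPy, where weights and activations are constrained to {-1, +1} so the multiply-accumulate reduces to XNOR and popcount. The function names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1} with the sign function (0 maps to +1).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_conv2d(act, wgt):
    # act: (H, W) binarized input; wgt: (K, K) binarized kernel.
    # With values in {-1, +1} each product is +1 on a bit match and -1
    # on a mismatch, so the dot product equals matches - mismatches,
    # which hardware evaluates with XNOR gates plus a popcount.
    H, W = act.shape
    K = wgt.shape[0]
    out = np.zeros((H - K + 1, W - K + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = act[i:i + K, j:j + K]
            matches = np.sum(patch == wgt)   # XNOR + popcount
            out[i, j] = 2 * matches - K * K  # matches - mismatches
    return out

act = binarize(np.random.randn(8, 8))
wgt = binarize(np.random.randn(3, 3))
print(binary_conv2d(act, wgt))
```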

Cited by 12 publications (12 citation statements). References 24 publications.
“…In conventional works, dense random signals (r(t)) are used, such as uniform random signals over [-1, +1] [1], [5] or binary random signals from {-1, +1} [3], [4], [6], shown in Fig. 2(b) and (c).…”
Section: Spin-State Update With Dense Random Signal
confidence: 99%
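A minimal sketch (an assumption for illustration, not code from the cited works) contrasting the two kinds of dense random signal described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_random_signal(n):
    # Dense uniform random signal drawn from the interval [-1, +1].
    return rng.uniform(-1.0, 1.0, size=n)

def binary_random_signal(n):
    # Dense binary random signal drawn from the set {-1, +1}.
    return rng.choice([-1.0, 1.0], size=n)

print(uniform_random_signal(5))
print(binary_random_signal(5))
```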
“…The bidirectional computing capability is realized by reducing the network energy to the global minimum energy with noise induced by random signals (e.g., a multiplier can be used as a factorizer in the backward mode). Owing to this unique feature, several challenging problems can be quickly solved, such as integer factorization (e.g., cryptography problems [1]) and machine learning (e.g., training neural networks [3], [4]).…”
Section: Introduction
confidence: 99%
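To make the bidirectional operation concrete, here is a minimal sketch, not the cited hardware, of an invertible AND gate expressed as an energy function and relaxed by Gibbs sampling: clamping the inputs computes the output (forward mode), while clamping the output makes the inputs settle into consistent assignments (backward mode). The QUBO coefficients below are one standard choice and are an assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def and_energy(a, b, c):
    # QUBO energy whose ground states (E = 0) are exactly the rows of
    # the AND truth table c = a AND b; every inconsistent row has E >= 1.
    return a * b - 2 * a * c - 2 * b * c + 3 * c

def gibbs_solve(clamped, beta=3.0, steps=2000):
    # Gibbs-sample the unclamped bits; `clamped` maps names to fixed values.
    state = {v: int(rng.integers(0, 2)) for v in "abc"}
    state.update(clamped)
    free = [v for v in "abc" if v not in clamped]
    for _ in range(steps):
        v = free[rng.integers(len(free))]
        e0 = and_energy(**{**state, v: 0})
        e1 = and_energy(**{**state, v: 1})
        # Boltzmann probability that bit v equals 1 given the others.
        p1 = 1.0 / (1.0 + np.exp(beta * (e1 - e0)))
        state[v] = int(rng.random() < p1)
    return state

# Forward mode: inputs clamped, output relaxes to a AND b.
print(gibbs_solve({"a": 1, "b": 1}))  # c -> 1 with high probability
# Backward (invertible) mode: output clamped to 0; the inputs settle
# into one of the three pairs with a AND b = 0 (probabilistically).
print(gibbs_solve({"c": 0}))
```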
“…The spin-gate circuit has been presented [9] for a fully parallel architecture with a fixed Hamiltonian. Each spin receives a different number of inputs from other spins, and the Hamiltonian coefficients are hardwired for a dedicated application [5]-[7]. The fully parallel manner restricts the invertible-logic hardware to small-scale Hamiltonians due to area limitations on application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs).…”
Section: B. Hardware Implementation of CMOS Invertible Logic Using Stochastic Computing
confidence: 99%
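As a back-of-the-envelope sketch of the area argument above (an illustrative assumption, not the cited analysis), the number of hardwired coupling coefficients in a fully parallel design grows with the number of nonzero interactions, up to O(N^2) for a dense N-spin Hamiltonian:

```python
import numpy as np

def hardwired_resources(J):
    # J: (N, N) symmetric coupling matrix of a fixed Hamiltonian.
    # Each nonzero J[i, j] becomes a dedicated hardwired coefficient,
    # so a dense Hamiltonian needs N * (N - 1) / 2 of them.
    couplings = int(np.count_nonzero(np.triu(J, k=1)))
    fan_in = np.count_nonzero(J, axis=1)  # inputs arriving at each spin
    return couplings, fan_in

J = np.ones((64, 64)) - np.eye(64)  # dense 64-spin example
couplings, fan_in = hardwired_resources(J)
print(couplings, fan_in.max())      # 2016 couplings, fan-in 63 per spin
```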
“…By reducing the network energy using random signals, the bidirectional operation can be realized probabilistically. This unique feature of bidirectional computing can be applied to solving several critical issues, such as integer factorization (e.g., an invertible multiplier operates as a factorizer in the backward mode) and training neural networks [5]-[7].…”
Section: Introduction
confidence: 99%
“…cryptography problems [2]) and machine learning (e.g., training neural networks [3], [4]). The Hamiltonian is constructed as a network of spins (probabilistic nodes) with interactions among them.…”
Section: Introduction
confidence: 99%
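For reference, the energy function these citing works describe is the standard Ising-style Hamiltonian over spins s_i in {-1, +1}; the exact notation varies by paper, so the following form is an assumption:

```latex
% Ising-style Hamiltonian over a network of spins with couplings J_{ij}
% and biases h_i; valid logic assignments occupy the ground states.
H(s) = -\sum_{i<j} J_{ij}\, s_i s_j - \sum_i h_i s_i
% A common probabilistic spin update driven by a dense random signal r(t):
m_i(t+1) = \operatorname{sgn}\!\big(\tanh\beta I_i(t) + r(t)\big),
\qquad I_i(t) = \sum_j J_{ij}\, m_j(t) + h_i
```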