2019
DOI: 10.1145/3309882
Low-Cost Stochastic Hybrid Multiplier for Quantized Neural Networks

Abstract: With increased interest in neural networks, hardware implementations of neural networks have been investigated. Researchers pursue low hardware cost through technologies such as stochastic computing (SC) and quantization. More specifically, quantization reduces the total number of trained weights and thereby lowers hardware cost. SC aims to lower hardware cost substantially by using simple logic gates instead of complex arithmetic units. However, the advantages of both quantization and SC…
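For readers unfamiliar with SC, the following minimal Python sketch (illustrative only; it is not code from the paper, and the value 0.75 and stream length are arbitrary) shows the core representation the abstract alludes to: a value in [0, 1] encoded as the probability of 1s in a bitstream, so that arithmetic can be done with simple gates.

import random

def to_stream(p, n, rng):
    # Unipolar SC encoding: each bit is 1 with probability p.
    return [rng.random() < p for _ in range(n)]

def from_stream(bits):
    # Decoding: the represented value is the fraction of 1s.
    return sum(bits) / len(bits)

rng = random.Random(42)
print(from_stream(to_stream(0.75, 4096, rng)))  # approximately 0.75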

Cited by 14 publications (6 citation statements)
References 32 publications
“…The computing robustness, fault-tolerant nature, scalability and reduced consumption footprint are among the key characteristics that have made this fruitful technology popular in recent research. The investigations aim to develop effective SC-based architectures that can be beneficially applied in image processing algorithms [3], [4], [5], [6], general-purpose digital filter structures [7], [8], [9], [10], error-correction hardware solutions [11], and artificial neural networks (ANNs) [12]. The cost of the aforementioned attributes is a trade-off between precision and latency in signal representations, since the longer the processed bit stream, the higher the achieved precision.…”
Section: A. Related Work (citation type: mentioning)
confidence: 99%
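The precision/latency trade-off noted in this statement is easy to see numerically. Below is a minimal Python sketch (not from the paper; the target value 0.6, the stream lengths, and the trial count are arbitrary assumptions) showing that the mean decoding error of a Bernoulli bitstream shrinks as the stream grows.

import random

def estimate(p, n, rng):
    # Encode p as an n-bit Bernoulli stream, then decode it
    # back as the fraction of 1s.
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(0)
for n in (16, 256, 4096):
    errs = [abs(estimate(0.6, n, rng) - 0.6) for _ in range(200)]
    print(n, sum(errs) / len(errs))  # mean |error| falls roughly as 1/sqrt(n)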
“…Similarly, the NOT gate (inverter) provides 1 − p if the input stream x is characterized by a generating probability p. Various approximating circuits have been designed that execute more accurate addition and subtraction operations; see references [12], [16], [37], [40] for more detail.…”
Section: Addition and Subtraction (citation type: mentioning)
confidence: 99%
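As a concrete illustration of the inverter behaviour quoted above, and of one classic approximating circuit for addition (the multiplexer-based scaled adder, a standard SC construction and not necessarily the circuit of references [12], [16], [37], [40]), consider this Python sketch with arbitrary example probabilities.

import random

rng = random.Random(1)
n = 100_000

def stream(p):
    # Unipolar bitstream with generating probability p.
    return [rng.random() < p for _ in range(n)]

x = stream(0.3)

# NOT gate: inverting a stream with probability p yields 1 - p.
not_x = [not b for b in x]
print(sum(not_x) / n)  # approximately 0.7

# 2:1 multiplexer with a 0.5-probability select line computes
# the scaled sum (p1 + p2) / 2.
a, b, sel = stream(0.4), stream(0.8), stream(0.5)
mux = [v if s else u for u, v, s in zip(a, b, sel)]
print(sum(mux) / n)  # approximately (0.4 + 0.8) / 2 = 0.6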
“…For example, multiplication operations are implemented using simple standard AND gates in SC [2]; additions can be achieved by OR gates as proposed in [3]. In terms of applications, previous works [4]-[12] have applied SC technology to the implementation of NNs to achieve energy-efficient and low-power hardware designs. For instance, Li et al. [5] proposed a NN implemented using stochastic components only, which successfully reduced power consumption.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
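The AND-gate multiplication and OR-gate addition mentioned in this statement can be checked in a few lines. A minimal sketch (illustrative, with arbitrary example probabilities):

import random

rng = random.Random(2)
n = 100_000

def stream(p):
    return [rng.random() < p for _ in range(n)]

# AND of independent unipolar streams multiplies probabilities.
a, b = stream(0.5), stream(0.4)
prod = [u and v for u, v in zip(a, b)]
print(sum(prod) / n)  # approximately 0.5 * 0.4 = 0.2

# OR approximates addition: P(a or b) = p1 + p2 - p1*p2,
# which is close to p1 + p2 only when both values are small.
c, d = stream(0.05), stream(0.08)
approx_sum = [u or v for u, v in zip(c, d)]
print(sum(approx_sum) / n)  # approximately 0.126 versus the exact 0.13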
“…Therefore, it is crucial to design fault-robust and noise-robust algorithms. Stochastic Computing (SC) has introduced a hardware-efficient bitstream computational paradigm that manipulates data in the form of non-stationary Bernoulli sequences [1], giving the same level of significance to all bits and thereby remaining robust to bit-flipping errors [2]. Moreover, SC can reduce hardware resource utilisation by using simple logic gates instead of sophisticated arithmetic units [3]-[6].…”
Citation type: mentioning
confidence: 99%
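The bit-flip robustness claimed in this statement follows from every bit of a stochastic stream carrying equal weight, in contrast with positional binary, where the most significant bit dominates. A minimal sketch (illustrative parameters, not from the cited work):

import random

rng = random.Random(3)
n = 1024

bits = [rng.random() < 0.5 for _ in range(n)]
decode = lambda s: sum(s) / len(s)

# Flip 10 random bits: the decoded value moves by at most 10/n,
# because each bit contributes equally.
flipped = list(bits)
for i in rng.sample(range(n), 10):
    flipped[i] = not flipped[i]
print(abs(decode(flipped) - decode(bits)))  # <= 10/1024

# In an 8-bit positional word, a single MSB flip on 128 (0b10000000)
# produces 0, an error of about half the full range.
print(abs((128 ^ 0b10000000) - 128) / 255)  # about 0.502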
“…A few studies have recently investigated the use of bitstream processing in the context of neural networks [2], [4]-[6]. Most of the preceding bitstream-processing neural networks operate at either full precision or 2-or-more-bit quantisation of the weights and activations.…”
Citation type: mentioning
confidence: 99%
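As context for the quantisation levels this statement refers to, here is a generic uniform weight quantiser in Python (a hedged sketch; the function quantize and its range assumptions are illustrative, not the scheme of the paper or of the works cited above):

import random

def quantize(w, bits):
    # Uniform quantization of a weight in [-1, 1] to 2**bits levels.
    levels = 2 ** bits - 1
    scale = levels / 2.0
    return round((w + 1.0) * scale) / scale - 1.0

rng = random.Random(4)
weights = [rng.uniform(-1, 1) for _ in range(5)]
print([round(w, 3) for w in weights])
print([round(quantize(w, 2), 3) for w in weights])  # 2-bit: four levels in [-1, 1]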