1993
DOI: 10.1109/78.229903

Multilayer feedforward neural networks with single powers-of-two weights

Cited by 57 publications (29 citation statements). References 7 publications.

“…It has been concluded in [12] that the weight distribution in a neural network resembles a normal distribution when the word width is long enough. If the fan-in N_w is large enough (N_w > 30), then by the central limit theorem the weighted sum can be treated as a sum of N_w independent random input variables, and its probability density function can then be considered approximately normal.…”
Section: The Fixed-point Quantization Analysis Of Output Function
confidence: 99%
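
As a side note on the central limit theorem argument quoted above, the following is a minimal sketch (not from the cited papers) that empirically checks how the weighted sum of a neuron with a large fan-in approaches a normal distribution. The fan-in of 64, the Gaussian weight spread, and the uniform input range are illustrative assumptions.

    # Minimal sketch (illustrative assumptions): with fan-in N_w > 30, the
    # weighted sum of independent inputs looks approximately normal.
    import numpy as np

    rng = np.random.default_rng(0)
    N_w = 64                                   # fan-in, chosen > 30
    trials = 100_000

    weights = rng.normal(0.0, 0.1, size=N_w)             # one neuron's weights
    inputs = rng.uniform(0.0, 1.0, size=(trials, N_w))   # independent inputs
    sums = inputs @ weights                               # weighted sums

    # Skewness and kurtosis of a normal distribution are 0 and 3.
    centered = sums - sums.mean()
    skew = np.mean(centered**3) / sums.std()**3
    kurt = np.mean(centered**4) / sums.std()**4
    print(f"mean={sums.mean():.4f}  std={sums.std():.4f}  "
          f"skew={skew:.3f}  kurtosis={kurt:.3f}")
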
“…However, direct quantization of the trained floating-point weights does not yield good results. Therefore, we employ a weight quantization strategy similar to the algorithms proposed in [7,13] to retrain the weights after the direct quantization. Also, internal signals (the output values of the units) are uniformly quantized over the range 0 to 1.…”
Section: Fixed-point DNN Design For Phoneme Recognition
confidence: 99%
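
To make the quantize-then-retrain idea above concrete, here is a hedged sketch of the two quantizers involved: mapping trained floating-point weights to single powers of two and uniformly quantizing unit outputs in [0, 1]. It is not the exact algorithm of [7,13]; the function names, exponent range, and bit width are assumptions for illustration, and in practice the network is retrained after this direct quantization step, as the quote notes.

    # Hedged sketch, not the exact algorithm of [7,13]: direct quantization of
    # weights to single powers of two plus uniform quantization of activations.
    import numpy as np

    def quantize_pow2(w, min_exp=-7, max_exp=0):
        """Round each weight to +/- 2^e, with e chosen by rounding in the log domain."""
        mag = np.abs(w)
        exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0**min_exp))),
                      min_exp, max_exp)
        q = np.sign(w) * 2.0**exp
        return np.where(mag < 2.0**(min_exp - 1), 0.0, q)  # very small weights -> 0

    def quantize_activation(a, bits=8):
        """Uniformly quantize unit outputs over the range [0, 1]."""
        levels = 2**bits - 1
        return np.round(np.clip(a, 0.0, 1.0) * levels) / levels

    w = np.array([0.013, -0.72, 0.31, 0.0004])
    print(quantize_pow2(w))            # approx. [ 0.015625 -1.  0.25  0. ]
    print(quantize_activation(np.array([0.1234, 0.9876])))
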
“…There are several implementation approaches for neural networks, such as analog, digital, hybrid, and FPGA [5,6,7,8,9,10,11]. However, the number of neurons in a layer is restricted to a small number in those implementations.…”
Section: Introduction
confidence: 98%
“…2. We do not perform piecewise quantization on the weights as in [9], because we find that the DNN is more susceptible to the quantization error of the weights than to that of the activations, as shown in Fig. 2.…”
Section: Quantizing Activations Into Powers-of-two Integers
confidence: 99%
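
A correspondingly hedged sketch of the activation side follows; it is not the cited authors' code. The cited work quantizes activations into powers-of-two integers, whereas this simplified variant keeps a fractional power-of-two grid and is only meant to illustrate the rounding in the log domain while the weights keep a finer representation, reflecting the quoted observation that the DNN tolerates activation quantization better than weight quantization. The function name, exponent range, and ReLU-style clipping are assumptions.

    # Hedged sketch (assumptions noted above): power-of-two quantization of
    # non-negative activations.
    import numpy as np

    def pow2_activation_quant(a, min_exp=-6, max_exp=0):
        """Replace each positive activation with 2^round(log2(a)); zeros stay 0."""
        a = np.clip(a, 0.0, None)     # assume ReLU-style, non-negative outputs
        out = np.zeros_like(a)
        nz = a > 0
        exp = np.clip(np.round(np.log2(a[nz])), min_exp, max_exp)
        out[nz] = 2.0**exp
        return out

    acts = np.array([0.0, 0.03, 0.2, 0.9, 1.7])
    print(pow2_activation_quant(acts))   # -> [0.  0.03125  0.25  1.  1.]
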