2020 IEEE 31st International Conference on Application-Specific Systems, Architectures and Processors (ASAP)
DOI: 10.1109/asap49362.2020.00020

Training Neural Nets using only an Approximate Tableless LNS ALU

Cited by 7 publications (7 citation statements). References 12 publications.
“…However, it can be seen in Table 2 that for very short word lengths, the average arithmetic error for bases other than base-2 is much lower than 0.25 × ULP_LNS, especially for the subtraction table. For example, base-1.417 gives an arithmetic error of 17.1544% of ULP_LNS for Q(2,2), while base-1.415 gives 19.9971% of ULP_LNS (in the log domain) for Q(2,3). These errors converge to around 0.25 × ULP_LNS as we increase the word length.…”
Section: Evaluation of Arithmetic Error
confidence: 97%
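The metric in this excerpt can be sketched in a few lines: quantize the Gaussian logarithm, d_b(z) = log_b(1 − b^z) for subtraction or s_b(z) = log_b(1 + b^z) for addition, onto a Q(int,frac) grid and average the rounding error over all representable negative z. The Python sketch below assumes round-to-nearest quantization and a sweep of z over the negative Q(int,frac) range; the cited paper's exact sweep and saturation rules may differ, so the printed values are illustrative rather than a reproduction of Table 2.

import math

def avg_lns_error(base, int_bits, frac_bits, subtract=False):
    """Average rounding error of the quantized Gaussian log, in ULP_LNS."""
    ulp = 2.0 ** -frac_bits
    steps = (2 ** int_bits) * (2 ** frac_bits)   # representable z in [-2^int, 0)
    total = 0.0
    for i in range(1, steps + 1):
        z = -i * ulp                             # z = 0 is singular for subtraction
        ratio = 1 - base ** z if subtract else 1 + base ** z
        exact = math.log(ratio, base)            # d_b(z) or s_b(z)
        rounded = round(exact / ulp) * ulp       # round-to-nearest on the Q grid
        total += abs(rounded - exact)
    return total / steps / ulp                   # fraction of ULP_LNS

for base, (ib, fb) in [(1.417, (2, 2)), (1.415, (2, 3)), (2.0, (2, 6))]:
    print(base, avg_lns_error(base, ib, fb, subtract=True))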
“…Vogel et al. [42] present a low-word-length quantization method for LNS to be used in a neural network and have shown LNS to achieve 22.3% lower power compared to an 8-bit fixed-point design. Arnold et al. [3] present an approach to implement back propagation using a tableless LNS ALU based on a modified Mitchell's method [28] and achieve a one-third reduction in hardware resources compared to a conventional fixed-point implementation.…”
Section: Related Work
confidence: 99%
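Mitchell's method [28] replaces the log and antilog curves with their chord approximations, 2^f ≈ 1 + f and log2(1 + t) ≈ t for 0 ≤ f, t < 1, which turns the LNS addition correction log2(1 + 2^−d) into shifts and adds with no table. The Python sketch below shows the plain Mitchell interpolation, not the specific modified variant used by Arnold et al. [3]; the test values are arbitrary.

import math

def mitchell_lns_add(x, y):
    """Tableless approximation of log2(2**x + 2**y).

    Splits d = |x - y| into integer part k and fraction f, then applies
    Mitchell's chords: 2**-d ~ (2 - f) * 2**-(k + 1) and log2(1 + t) ~ t,
    so hardware needs only a shift, a subtract, and an add.
    """
    hi, lo = max(x, y), min(x, y)
    d = hi - lo
    k = int(d)                          # integer part -> shift amount
    f = d - k                           # fractional part, 0 <= f < 1
    t = (2.0 - f) / 2 ** (k + 1)        # approximate 2**-d
    return hi + t                       # approximate hi + log2(1 + 2**-d)

x, y = 3.2, 1.7
print(math.log2(2**x + 2**y), mitchell_lns_add(x, y))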
“…This has prevented its adoption as a mainstream format for general-purpose computing. A common option is to use approximate LNS adders/subtractors [17,18,19,20]. The alternative approach proposed in this work is to use very low precisions (less than 8 bits) for which accurate LNS arithmetic is cheap.…”
Section: A. Logarithmic Number Systems (LNS)
confidence: 99%
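The cheapness claim follows from simple counting: at such low precisions the exact addition correction s(z) = log2(1 + 2^z) only has to be stored at a few dozen points. The Python sketch below enumerates the full table for a Q(2,3), base-2 log word; the format choice is an illustrative assumption, not one taken from the cited paper.

import math

# Enumerate the exact LNS-addition correction table for a Q(2,3), base-2
# log word. z = -|x - y| ranges over [-4, 0] in steps of one ULP,
# giving only 33 entries, i.e., a trivially small lookup table.
frac_bits = 3
ulp = 2.0 ** -frac_bits
steps = 4 * 2 ** frac_bits             # integer range 2^2 = 4
table = []
for i in range(steps + 1):
    z = -i * ulp
    s = math.log2(1 + 2 ** z)          # exact Gaussian log s(z)
    table.append(round(s / ulp))       # stored as a Q(2,3) integer code
print(len(table), "entries:", table)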
“…Vogel et al. [26] present a low-word-length quantization method for LNS to be used in a neural network and have shown LNS to achieve 22.3% lower power compared to an 8-bit fixed-point design. Arnold et al. [27] present an approach to implement back propagation using a tableless LNS ALU based on a modified Mitchell's method [28] and achieve a one-third reduction in hardware resources compared to a conventional fixed-point implementation.…”
Section: Related Work
confidence: 99%