2014
DOI: 10.4304/jcp.9.10.2258-2265
ANN in Hardware with Floating Point and Activation Function Using Hybrid Methods

Cited by 4 publications (2 citation statements)
References 13 publications
“…There is a study that multiplied matrices using the 64-bit floating-point number format [6]. Artificial neural network design with floating-point numbers and the training of such networks are among these applications [7][8][9][10]. A customized single instruction multiple data (SIMD) unit has been designed using a 16-bit floating-point number format [11].…”
Section: IEEE754 Floating-Point Number Format
confidence: 99%
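
For orientation, the following is a minimal sketch, not code from the cited works, showing how an IEEE754 half-precision (16-bit) value decomposes into sign, exponent, and fraction fields; this is the 16-bit format referred to for the customized SIMD design in [11]. The field widths (1 sign bit, 5 exponent bits, 10 fraction bits, exponent bias 15) come from the IEEE754 standard; the sample bit patterns are illustrative.

/* Sketch: decode an IEEE754 half-precision (16-bit) bit pattern to double. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

static double half_to_double(uint16_t h)
{
    unsigned sign = (h >> 15) & 0x1;
    unsigned exp  = (h >> 10) & 0x1F;   /* 5-bit biased exponent */
    unsigned frac =  h        & 0x3FF;  /* 10-bit fraction */
    double value;

    if (exp == 0)            /* zero or subnormal: no implicit leading 1 */
        value = ldexp((double)frac, -24);
    else if (exp == 0x1F)    /* all-ones exponent: infinity or NaN */
        value = frac ? NAN : INFINITY;
    else                     /* normal: implicit leading 1, bias of 15 */
        value = ldexp(1.0 + frac / 1024.0, (int)exp - 15);

    return sign ? -value : value;
}

int main(void)
{
    /* 1.0, 3.140625 (half-precision pi), -2.0, smallest subnormal */
    uint16_t samples[] = { 0x3C00, 0x4248, 0xC000, 0x0001 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; ++i)
        printf("0x%04X -> %.10g\n", samples[i], half_to_double(samples[i]));
    return 0;
}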
“…An intuitive approach is to use a Taylor series expansion around the origin, as done by [52, 53] at the 4th and 5th order, respectively. In [54, 55] the authors propose two approaches: one combining a PWL approximation with a RA-LUT, and one combining the PWL approximation with a combinatorial logic circuit. The authors compare these solutions with classic PWL approximations (Alippi, PLAN), focusing on the required resources and the accuracy degradation.…”
Section: Activation Functions for Fast Computation
confidence: 99%
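
To illustrate the Taylor-series approach mentioned in this statement, the sketch below evaluates a 5th-order Taylor polynomial of tanh around the origin, tanh(x) ≈ x − x³/3 + 2x⁵/15, and compares it with the exact function. It is a generic example under the assumption of a tanh activation, not the circuits of [52]-[55]; it only shows how the approximation error grows away from zero, which is the accuracy-degradation trade-off the cited comparison addresses.

/* Sketch: 5th-order Taylor polynomial of tanh vs. the exact function. */
#include <stdio.h>
#include <math.h>

static double tanh_taylor5(double x)
{
    double x2 = x * x;
    /* tanh(x) = x - x^3/3 + 2x^5/15 + O(x^7) */
    return x * (1.0 - x2 / 3.0 + 2.0 * x2 * x2 / 15.0);
}

int main(void)
{
    for (double x = 0.0; x <= 2.0; x += 0.5)
        printf("x = %.1f  taylor = %+.6f  exact = %+.6f  |err| = %.2e\n",
               x, tanh_taylor5(x), tanh(x), fabs(tanh_taylor5(x) - tanh(x)));
    return 0;
}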