Efficient implementation of piecewise linear activation function for digital VLSI neural networks
1989
DOI: 10.1049/el:19891114

Cited by 65 publications (25 citation statements)
References 2 publications
“…Our approach gives a better maximum error than both the first and second order approximation of [11].
[8]: [-8,8), N/A, 0.0490, 0.0247
Alippi et al [9]: [-8,8), N/A, 0.0189, 0.0087
Amin et al [10]: [
[12]: [-5,5], 5, 0.0050, n/a
Basterretxea et al (q=3) [13]: [-8,8), N/A, 0.0222, 0.0077
Tommiska (337) [14]: [-8,8), N/A, 0.0039, 0.0017
Tommiska (336) [14]: [-8,8), N/A, 0.0077, 0.0033
Tommiska (236) [14]: [-4,4), N/A, 0.0077, 0.0040
Tommiska (235) [14]: [ …”
Section: Results (mentioning)
confidence: 93%
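The two error columns quoted above are, in comparisons of this kind, typically the maximum and the average absolute deviation of the approximation from the true sigmoid over the stated input range. A minimal sketch of how such figures are obtained, assuming that convention; the clamped ramp below is only a placeholder, not one of the cited designs:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pwl_sigmoid(x):
    """Placeholder piecewise linear approximation: a single ramp
    y = 0.25*x + 0.5 clamped to [0, 1] (not one of the cited designs)."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

# Sample the stated input range densely, e.g. [-8, 8) as in the table above.
x = np.linspace(-8.0, 8.0, 100_001)
err = np.abs(pwl_sigmoid(x) - sigmoid(x))
print(f"max error: {err.max():.4f}, average error: {err.mean():.4f}")

Sampling the range with a dense grid of this sort is usually enough to reproduce error figures to four decimal places.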
“…Furthermore, there is considerable variance within each category. For example, an A-Law companding technique is used in [8], a sum of steps approximation is used in [9], a multiplier-less piecewise approximation is presented in [10] and a recursive piecewise multiplier-less approximation is presented in [13]. An elementary function generator capable of multiple activation functions using a first and second order polynomial approximation is detailed in [11].…”
Section: Related Research (mentioning)
confidence: 99%
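As a concrete illustration of the multiplier-less piecewise category mentioned above, the sketch below uses the breakpoints and power-of-two slopes commonly quoted for the PLAN-style approximation attributed to Amin et al.; the coefficients are shown for illustration and should be checked against the cited paper. Because every slope is a negative power of two, each multiplication reduces to a shift-and-add in hardware.

import numpy as np

def plan_sigmoid(x):
    """Multiplier-less piecewise linear sigmoid (PLAN-style coefficients,
    included here for illustration only). All slopes are powers of two,
    so the products become shifts in a digital implementation."""
    a = np.abs(x)
    y = np.where(a >= 5.0, 1.0,
        np.where(a >= 2.375, 0.03125 * a + 0.84375,   # slope 1/32
        np.where(a >= 1.0,   0.125   * a + 0.625,     # slope 1/8
                             0.25    * a + 0.5)))     # slope 1/4
    # Exploit the sigmoid symmetry sigma(-x) = 1 - sigma(x)
    return np.where(x >= 0.0, y, 1.0 - y)

print(plan_sigmoid(np.array([-6.0, -1.0, 0.0, 1.0, 6.0])))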
“…The implementation of the neuron's nonlinear activation function and of its derivatives, used by the learning algorithm, is often solved by a piecewise linear approximation [4,5,7,17-20]. However, no implementation method has emerged as a universal solution.…”
Section: Simulation Results (mentioning)
confidence: 99%
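One convenient property behind the piecewise linear choice is that the derivative required by a gradient-based learning rule is simply the slope of the active segment, so the activation value and its derivative come from the same segment lookup. A minimal sketch of that idea, using a hypothetical three-segment hard-sigmoid-style table chosen purely for illustration:

import numpy as np

def pwl_activation_and_derivative(x, breakpoints, slopes, intercepts):
    """Generic piecewise linear activation together with its derivative.

    breakpoints : ascending segment boundaries (length n_segments - 1)
    slopes, intercepts : per-segment line parameters (length n_segments)

    The derivative is just the slope of the segment the input falls in,
    so both outputs share one table lookup (convenient in hardware).
    """
    seg = np.searchsorted(breakpoints, x)      # index of the active segment
    y = slopes[seg] * x + intercepts[seg]
    dy = slopes[seg]
    return y, dy

# Hypothetical 3-segment table: flat / ramp / flat (hard-sigmoid-like).
breaks = np.array([-2.0, 2.0])
slopes = np.array([0.0, 0.25, 0.0])
inters = np.array([0.0, 0.5, 1.0])
y, dy = pwl_activation_and_derivative(np.array([-3.0, 0.5, 3.0]), breaks, slopes, inters)
print(y, dy)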
“…Mathematical approximations of membership functions [5], [6] are also widely used, but any errors produced by crude approximations can be problematic for adaptive systems, as the approximation of a continuous function may not itself prove to be continuous across all segments.…”
Section: A. Fuzzification (mentioning)
confidence: 99%