Efficient VLSI Implementation of Neural Networks With Hyperbolic Tangent Activation Function
Year: 2014
DOI: 10.1109/tvlsi.2012.2232321

Cited by 134 publications (56 citation statements)
References 17 publications
“…In [39] the authors propose a comparison between two FPGA architectures which use floating-point accelerators based on RA-LUTs to compute fast AFs. The first solution, refined from the one proposed in [40, 41], implements the NN on a soft processor and computes the AFs through a smartly spaced RA-LUT. The second solution is an arithmetic chain coordinated by a VHDL finite state machine.…”
Section: Activation Functions For Fast Computation (mentioning)
confidence: 99%
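The RA-LUT scheme itself is not detailed in this excerpt. As a rough, non-authoritative sketch of the general idea, the C fragment below approximates tanh with a small range-addressable table; the names (ralut_tanh, LUT_SIZE, X_MAX), the table size, the input range, and the uniform spacing are arbitrary assumptions for illustration, whereas the cited works use a more carefully spaced table.

/* Illustrative sketch only: a small range-addressable LUT (RA-LUT) for tanh.
 * Table size, range and uniform spacing are example choices, not the
 * smartly spaced table of the cited works. Compile with -lm.            */
#include <math.h>
#include <stdio.h>

#define LUT_BITS 6                 /* 64 entries over the active range   */
#define LUT_SIZE (1 << LUT_BITS)
#define X_MAX    4.0f              /* |x| >= X_MAX saturates to +/- 1    */

static float lut[LUT_SIZE];

/* Fill the table once; entry i covers the sub-range [i*step, (i+1)*step). */
static void ralut_init(void)
{
    const float step = X_MAX / LUT_SIZE;
    for (int i = 0; i < LUT_SIZE; ++i)
        lut[i] = tanhf((i + 0.5f) * step);   /* sample at interval centre */
}

/* Approximate tanh(x): the table index is derived directly from the input
 * range (a "range-addressable" lookup), exploiting the odd symmetry of tanh. */
static float ralut_tanh(float x)
{
    float ax = fabsf(x);
    if (ax >= X_MAX)
        return (x < 0.0f) ? -1.0f : 1.0f;    /* saturation region */
    int idx = (int)(ax * (LUT_SIZE / X_MAX));
    float y = lut[idx];
    return (x < 0.0f) ? -y : y;
}

int main(void)
{
    ralut_init();
    for (float x = -2.0f; x <= 2.0f; x += 0.5f)
        printf("x=%5.2f  lut=%8.5f  exact=%8.5f\n", x, ralut_tanh(x), tanhf(x));
    return 0;
}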
“…For a neuron present in the hidden layer, the weight correction is governed by (2), in which the local gradient δ_j(n) is defined by (4), as given below.…”
Section: Candidate FFNN For Implementation (mentioning)
confidence: 99%
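Equations (2) and (4) belong to the citing paper and are not reproduced in this excerpt. For orientation only, the standard back-propagation expressions for a hidden-layer neuron j usually take the following form; the notation below is assumed, not quoted from the source.

% Standard back-propagation update for a hidden neuron j (assumed notation):
% eta = learning rate, y_i(n) = output of neuron i feeding neuron j,
% v_j(n) = induced local field, phi_j = activation function (here tanh),
% and the sum runs over the neurons k of the following layer.
\[
  \Delta w_{ji}(n) = \eta\, \delta_j(n)\, y_i(n)
\]
\[
  \delta_j(n) = \varphi_j'\!\big(v_j(n)\big) \sum_{k} \delta_k(n)\, w_{kj}(n)
\]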
“…An efficient approach to approximate the hyperbolic tangent activation function was proposed in [2]. The architecture proposed in this paper is based on the piecewise-linear approximation of [2]. The maximum allowable error is used as the design parameter in this architecture.…”
Section: A Hyperbolic Tangent Activation Function (mentioning)
confidence: 99%
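The concrete piecewise-linear architecture of [2] is not given in this excerpt. The sketch below is only a generic software illustration of a piecewise-linear tanh: the breakpoints are picked by hand and the slopes/offsets are simply the chords of tanh between them, so they do not reproduce the error-driven segmentation described in the cited paper, and all names are made up for this example.

/* Illustrative sketch only: piecewise-linear (PWL) approximation of tanh.
 * Breakpoints chosen by hand; slope/offset pairs are the chords of tanh
 * between successive breakpoints. Compile with -lm.                      */
#include <math.h>
#include <stdio.h>

typedef struct { float x0, slope, offset; } pwl_seg_t;

/* Segments cover x >= 0; negative inputs use the odd symmetry of tanh. */
static const pwl_seg_t segs[] = {
    { 0.0f, 0.9242f, 0.0000f },   /* chord of tanh on [0.0, 0.5) */
    { 0.5f, 0.5990f, 0.1626f },   /* chord on [0.5, 1.0)         */
    { 1.0f, 0.2870f, 0.4746f },   /* chord on [1.0, 1.5)         */
    { 1.5f, 0.1178f, 0.7284f },   /* chord on [1.5, 2.0)         */
    { 2.0f, 0.0452f, 0.8736f },   /* chord on [2.0, 2.5)         */
    { 2.5f, 0.0000f, 1.0000f },   /* saturation for x >= 2.5     */
};
#define NSEGS ((int)(sizeof segs / sizeof segs[0]))

static float pwl_tanh(float x)
{
    float ax = fabsf(x);
    int i = NSEGS - 1;
    while (i > 0 && ax < segs[i].x0)   /* pick the segment containing ax */
        --i;
    float y = segs[i].slope * ax + segs[i].offset;
    if (y > 1.0f) y = 1.0f;
    return (x < 0.0f) ? -y : y;
}

int main(void)
{
    /* Report the worst-case error of this hand-made fit over [-4, 4]. */
    float max_err = 0.0f;
    for (float x = -4.0f; x <= 4.0f; x += 0.01f) {
        float e = fabsf(pwl_tanh(x) - tanhf(x));
        if (e > max_err) max_err = e;
    }
    printf("max abs error of this example PWL fit: %f\n", max_err);
    return 0;
}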
“…When implementing this algorithm in an embedded environment, the limited computational capabilities call for a trade-off among precision, memory footprint, and computational cost. Different studies tested the embedded implementation of NNs to achieve optimal results, either by re-arranging the operations required to compute the linear part of the NN [17] to fully exploit pipelining, or by speeding up the costly non-linear activation function through different numerical approximations [18,19,20,21,22,23,24].…”
Section: Introduction (mentioning)
confidence: 99%
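To make the precision/memory trade-off mentioned above concrete, the following sketch sweeps the size of a plain uniformly spaced tanh lookup table and reports the resulting worst-case error; the range, table sizes, and function names are arbitrary example choices, not taken from the cited studies.

/* Illustrative sketch only: precision vs. memory for a uniformly spaced
 * tanh LUT over [0, 4). Larger tables cost more memory but reduce the
 * worst-case error. Compile with -lm.                                   */
#include <math.h>
#include <stdio.h>

static float lut_tanh(const float *tab, int n, float x_max, float x)
{
    float ax = fabsf(x);
    if (ax >= x_max) return (x < 0.0f) ? -1.0f : 1.0f;
    float y = tab[(int)(ax * n / x_max)];
    return (x < 0.0f) ? -y : y;
}

int main(void)
{
    const float x_max = 4.0f;
    for (int n = 16; n <= 1024; n *= 4) {        /* LUT entries = memory cost */
        float tab[1024];
        for (int i = 0; i < n; ++i)
            tab[i] = tanhf((i + 0.5f) * x_max / n);
        float max_err = 0.0f;
        for (float x = -4.0f; x <= 4.0f; x += 0.001f) {
            float e = fabsf(lut_tanh(tab, n, x_max, x) - tanhf(x));
            if (e > max_err) max_err = e;
        }
        printf("LUT entries = %4d  max abs error = %f\n", n, max_err);
    }
    return 0;
}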