2016 50th Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2016.7869646
Precise digital implementations of hyperbolic tanh and sigmoid function


Cited by 59 publications (18 citation statements) · References 9 publications
“…Approximations of sigmoid and hyperbolic tangent functions have been presented in many studies, as shown in [34][35][36]. In this work, the piecewise-linear (PWL) approximation method is adopted to implement the hyperbolic tangent and sigmoid activation functions because of its high approximation accuracy with an adjustable number of line segments and the simplicity of its hardware implementation.…”
Section: Stage 2: Configurable High-Performance Low-Power MLP Microarchitecture
confidence: 99%
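As a hedged illustration of the piecewise-linear approach described in this excerpt, the sketch below approximates tanh with a configurable number of line segments. The uniform breakpoints, segment count, and saturation range are illustrative assumptions, not values taken from the cited works.

    import numpy as np

    def pwl_tanh(x, n_segments=8, x_max=4.0):
        """Piecewise-linear approximation of tanh on [-x_max, x_max].

        Breakpoints are uniformly spaced (an illustrative choice); inputs
        outside the range saturate to the endpoint values tanh(+/-x_max).
        """
        # Breakpoints and the exact tanh values at those breakpoints.
        xs = np.linspace(-x_max, x_max, n_segments + 1)
        ys = np.tanh(xs)
        # np.interp linearly interpolates between breakpoints and clamps
        # to the endpoint values outside the covered range.
        return np.interp(x, xs, ys)

    # Quick accuracy check: maximum absolute error over a dense grid.
    grid = np.linspace(-6, 6, 10001)
    err = np.max(np.abs(pwl_tanh(grid) - np.tanh(grid)))
    print(f"max |error| with 8 segments: {err:.4f}")

Increasing n_segments tightens the approximation at the cost of a larger lookup/coefficient table, which is the accuracy/area trade-off the excerpt alludes to.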
“…The weight parameter matrix and offset parameter vector of the kth (k = 1, 2, …, K) hidden layer of the compression encoder are denoted by E_{w,k} and E_{b,k}, respectively. The first K − 2 hidden layers use the sigmoid activation function [35], expressed as sigmoid(x) = 1/(1 + e^(−x)). The last two hidden layers adopt the tanh activation function [35], expressed as tanh(x) = (e^x − e^(−x))/(e^x + e^(−x)).…”
Section: Structure and Working Principle of the Sending End
confidence: 99%
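A minimal sketch of the layer arrangement this excerpt describes: sigmoid on the first K − 2 hidden layers and tanh on the last two. The list-of-matrices representation of E_{w,k} and E_{b,k} and all shapes are assumptions for illustration, not details from the cited paper.

    import numpy as np

    def sigmoid(x):
        # sigmoid(x) = 1 / (1 + exp(-x)), maps inputs to (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    def encoder_forward(x, E_w, E_b):
        """Forward pass through K hidden layers of the compression encoder.

        E_w, E_b: lists holding the K weight matrices E_{w,k} and offset
        vectors E_{b,k} (hypothetical container format). Sigmoid is applied
        on the first K - 2 layers, tanh on the last two, as in the excerpt.
        """
        K = len(E_w)
        h = x
        for k in range(K):
            z = E_w[k] @ h + E_b[k]
            h = np.tanh(z) if k >= K - 2 else sigmoid(z)
        return h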
“…where g(·) is the activation function of the neurons, N_m is the set of contributing input feature maps, ⊗ denotes the convolution operation, k^{n−1}_{rm} is the convolution kernel matrix, and b^r_m is the bias matrix. The activation functions commonly used in neural networks include the sigmoid function [12], the ReLU function [13], and the tanh function [14].…”
Section: A Convolution Neural Network
confidence: 99%
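A minimal sketch of the convolutional-layer computation this excerpt describes: each output map m sums the convolutions of its contributing input maps r ∈ N_m with kernels k^{n−1}_{rm}, adds a bias, and applies the activation g. The container types, shapes, and the choice of tanh as the default activation are illustrative assumptions.

    import numpy as np
    from scipy.signal import convolve2d

    def conv_layer(inputs, kernels, bias, connections, g=np.tanh):
        """One convolutional layer:
        output[m] = g( sum over r in N_m of inputs[r] (*) kernels[(r, m)] + bias[m] ).

        inputs:      list of 2-D input feature maps x_r^{n-1}
        kernels:     dict (r, m) -> 2-D kernel k_{rm}^{n-1}
        bias:        dict m -> scalar bias for output map m
        connections: dict m -> list of contributing input-map indices N_m
        g:           activation function (tanh here; sigmoid and ReLU are
                     the other common choices named in the excerpt)
        """
        outputs = []
        for m, N_m in connections.items():
            # Sum the 'valid' convolutions of all contributing input maps;
            # all contributing maps are assumed to have the same size.
            acc = sum(convolve2d(inputs[r], kernels[(r, m)], mode="valid")
                      for r in N_m)
            outputs.append(g(acc + bias[m]))
        return outputs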