2021 IEEE Region 10 Symposium (TENSYMP) 2021
DOI: 10.1109/tensymp52854.2021.9551000
Accelerating the Activation Function Selection for Hybrid Deep Neural Networks – FPGA Implementation

Cited by 3 publications (1 citation statement)
References 7 publications
“…While the Hard sigmoid is not as commonly used as the logistic sigmoid, it can be used, for example, in binarized neural networks with stochastic activation functions [253]; binarized neural networks can lead to much faster inference than regular neural networks, e.g., Courbariaux et al. reached up to a 7× speed-up without any loss in classification accuracy [253] (however, even better speed-ups can be obtained using, for example, field-programmable gate array (FPGA) implementations as in [255]).…”
Section: Hard Sigmoid
Confidence: 99%
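The quoted statement contrasts the hard sigmoid with the logistic sigmoid. As a minimal sketch (an assumption, not taken from the cited papers): one common piecewise-linear variant of the hard sigmoid clips the line 0.2·x + 0.5 to [0, 1], which avoids the exponential of the logistic sigmoid and is cheap to realize in hardware such as FPGAs:

```python
import math

def hard_sigmoid(x: float) -> float:
    # Piecewise-linear approximation: clip(0.2*x + 0.5, 0, 1).
    # The slope/offset here follow one widely used convention
    # (e.g. Keras's hard_sigmoid); other variants exist.
    return max(0.0, min(1.0, 0.2 * x + 0.5))

def logistic_sigmoid(x: float) -> float:
    # Standard logistic sigmoid, for comparison.
    return 1.0 / (1.0 + math.exp(-x))
```

Both functions agree at x = 0 (value 0.5) and saturate at 0 and 1; the hard variant reaches saturation exactly at |x| = 2.5 instead of asymptotically, which is what makes it attractive for fixed-point hardware implementations.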