2021
DOI: 10.1109/tcsi.2020.3047331
Fully Integrated Analog Machine Learning Classifier Using Custom Activation Function for Low Resolution Image Classification

Cited by 17 publications (15 citation statements)
References 26 publications
“…Instead of doing error correction at the circuit level, which increases area and power cost, we use an error-aware ANN design methodology that takes into account the difference between ideal activation functions and the actual activation functions implemented using analog circuits to produce high-accuracy AI predictions. In our prior study that focused on an image classification task 36, 37 we developed an error-aware hardware-software co-design interface in which SPICE simulations are used to characterize a unit activation function, and the characterization data is imported into Matlab for training the ANN (see Fig. 7e).…”
Section: Methods (mentioning)
confidence: 99%
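The excerpt above describes the characterization half of the co-design interface: a SPICE sweep of the unit activation cell is turned into data that the ANN training flow consumes. Below is a minimal Python/NumPy sketch of that idea; the synthetic "measured" curve, its gain and swing values, and the comparison against an ideal tanh are illustrative assumptions, not data from the cited paper, whose actual flow exports SPICE results into Matlab.

```python
# Minimal sketch: turn sampled transfer-curve data from a circuit simulator into
# a callable activation (plus its derivative) that a training framework can use.
# The "measured" curve below is a placeholder standing in for a real SPICE export.
import numpy as np

# Placeholder for the SPICE sweep of the unit activation cell:
# a tanh-like amplifier with slightly compressed gain and limited output swing.
vin = np.linspace(-1.0, 1.0, 201)
vout = 0.9 * np.tanh(1.8 * vin)

def analog_act(x):
    """Non-ideal activation: interpolate the simulated transfer curve."""
    return np.interp(x, vin, vout)

def analog_act_grad(x, eps=1e-3):
    """Derivative of the measured curve, estimated by central differences."""
    return (analog_act(x + eps) - analog_act(x - eps)) / (2.0 * eps)

# Quantify how far the circuit deviates from the ideal tanh it approximates;
# this is the mismatch that error-aware training absorbs instead of adding
# circuit-level error correction.
x = np.linspace(-1.0, 1.0, 401)
print("max |analog - ideal tanh|:", np.max(np.abs(analog_act(x) - np.tanh(x))))
```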
“…The custom analog activation functions resemble their ideal, mathematical counterparts, but are not exactly the same. To ensure good matching between the software ANN model and IC measurements, we use a hardware-software co-design methodology in which amplifier transfer curves, and their derivatives, are used to train the ANN model iteratively 38. Stochastic gradient descent is used to optimize the ANN model by minimizing the loss function at each epoch.…”
Section: Methods (mentioning)
confidence: 99%
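This excerpt adds the training half: the measured transfer curve and its derivative sit inside the backpropagation path, and stochastic gradient descent minimizes the loss at each epoch. The sketch below expresses that arrangement as a custom autograd function; the synthetic `vin`/`vout` sweep, the 64-16-4 layer sizes, and the use of PyTorch are assumptions made for illustration, not the authors' actual SPICE-plus-Matlab implementation.

```python
# Hedged sketch: wrap the measured activation and its numerical derivative in a
# custom autograd function so an off-the-shelf SGD optimizer can train through it.
import numpy as np
import torch

vin = np.linspace(-1.0, 1.0, 201)      # placeholder sweep of the amplifier input
vout = 0.9 * np.tanh(1.8 * vin)        # placeholder measured output

class MeasuredActivation(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        y = np.interp(x.detach().cpu().numpy(), vin, vout)
        return torch.as_tensor(y, dtype=x.dtype, device=x.device)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        xn = x.detach().cpu().numpy()
        eps = 1e-3                      # derivative of the measured curve
        dydx = (np.interp(xn + eps, vin, vout) - np.interp(xn - eps, vin, vout)) / (2 * eps)
        return grad_out * torch.as_tensor(dydx, dtype=x.dtype, device=x.device)

class AnalogMLP(torch.nn.Module):
    def __init__(self, n_in=64, n_hid=16, n_out=4):
        super().__init__()
        self.fc1 = torch.nn.Linear(n_in, n_hid)
        self.fc2 = torch.nn.Linear(n_hid, n_out)

    def forward(self, x):
        return self.fc2(MeasuredActivation.apply(self.fc1(x)))

model = AnalogMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = torch.nn.CrossEntropyLoss()

def train_epoch(loader):
    """One SGD epoch over (image, label) batches, minimizing the loss while
    backpropagating through the measured (non-ideal) activation."""
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```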
“…The sigmoid, tanh, and ReLU functions are widely used activation functions [25–28].…”
Section: CNN (mentioning)
confidence: 99%
“…The sigmoid, tanh, and ReLU functions are widely used activation functions [25–28]. The activation functions are shown in Figure 5.…”
Section: Investment Risk and DL Model Analysis Of Multinational Enter... (mentioning)
confidence: 99%
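For reference, the three standard activation functions named in the two excerpts above can be written as follows; this is a plain NumPy illustration, not code from any of the cited works.

```python
# Reference definitions of the activation functions mentioned in the excerpts.
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: squashes inputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Hyperbolic tangent: squashes inputs into (-1, 1), zero-centered."""
    return np.tanh(x)

def relu(x):
    """Rectified linear unit: passes positive inputs, clamps negatives to 0."""
    return np.maximum(0.0, x)
```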