2016
DOI: 10.1007/978-3-319-46687-3_26

Time-Domain Weighted-Sum Calculation for Ultimately Low Power VLSI Neural Networks

Abstract: A time-domain analog weighted-sum calculation model is proposed based on an integrate-and-fire-type spiking neuron model. The proposed calculation model is applied to multi-layer feedforward networks, in which weighted summations with positive and negative weights are performed separately in each layer, and the summation results are then fed into the next layer without a subtraction operation. We also propose very-large-scale integrated (VLSI) circuits to implement the proposed model. Unlike the conventional a…
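To make the separated-summation idea concrete, here is a minimal behavioral sketch in Python/NumPy. The function and variable names (split_weighted_sums, W_pos, W_neg) are illustrative, not from the paper, and the sketch models only the arithmetic, not the time-domain spike-timing circuits the paper actually proposes.

```python
import numpy as np

def split_weighted_sums(x, W):
    # Illustrative split: keep the positive-weight sum and the
    # negative-weight sum separate, forwarding both to the next
    # layer instead of subtracting them within the layer.
    W_pos = np.maximum(W, 0.0)    # positive weights only
    W_neg = np.maximum(-W, 0.0)   # magnitudes of the negative weights
    s_pos = x @ W_pos.T           # summation over positive weights
    s_neg = x @ W_neg.T           # summation over negative weights
    return s_pos, s_neg

# Example: a 4-input, 3-neuron layer with random signed weights.
x = np.array([0.2, 0.5, 0.1, 0.9])
W = np.random.randn(3, 4)
s_pos, s_neg = split_weighted_sums(x, W)
# A conventional layer would compute s_pos - s_neg; the proposed
# model defers that subtraction rather than performing it per layer.
```

Deferring the subtraction matters in analog hardware because a subtraction stage adds circuitry to every layer; the sketch above only mirrors that dataflow at the arithmetic level.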

Cited by 8 publications (15 citation statements). References 29 publications.
“…This is due to the overhead associated with coding a back-propagation algorithm on an arbitrary network, and is forgone in favor of a randomly generated network. Since a standard activation function is used to approximate the transfer function of a CMOS inverter, the virtual network could theoretically be trained using a standard back-propagation algorithm [10,13,21,22]. This training would need to be tested for its rate of convergence, but other than that there appears to be no reason why it would not be effective, as the derivative is still a simple function of the output.…”
Section: Back Propagation
confidence: 99%
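The statement above hinges on two points: a standard activation function can approximate a CMOS inverter's voltage transfer curve, and its derivative can be written in terms of the output itself, which is what standard back-propagation needs. A hedged sketch using a falling logistic curve follows; V_DD, V_TH, and GAIN are made-up illustrative parameters, not values from the cited work.

```python
import numpy as np

# Hypothetical parameters for an inverter-like falling sigmoid;
# illustrative only, not taken from the cited paper.
V_DD, V_TH, GAIN = 1.0, 0.5, 10.0

def inverter_act(v_in):
    # Falling logistic curve: output near V_DD for low inputs and
    # near 0 for high inputs, like a CMOS inverter's transfer curve.
    return V_DD / (1.0 + np.exp(GAIN * (v_in - V_TH)))

def inverter_act_grad(v_out):
    # The derivative is expressible from the output alone, so the
    # usual back-propagation recurrence applies unchanged.
    return -(GAIN / V_DD) * v_out * (V_DD - v_out)

# Sanity check against a numerical derivative at v = 0.4.
v = 0.4
num = (inverter_act(v + 1e-6) - inverter_act(v - 1e-6)) / 2e-6
ana = inverter_act_grad(inverter_act(v))
assert abs(num - ana) < 1e-4
```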
“…If a programmable version of the forward-propagation network could be implemented, some new computer architectures could be explored [3,21,22]. The idea is that a programmable chip of this kind could be included in the CPU.…”
Section: New Computer Architecture
confidence: 99%