1992 IEEE International Solid-State Circuits Conference Digest of Technical Papers
DOI: 10.1109/isscc.1992.200450
Neuro chips with on-chip backprop and/or Hebbian learning

Cited by 14 publications (10 citation statements)
References 2 publications
“…It is possible to design analog circuits that approximate these characteristics, but the result is a rather large synapse and thus an expensive solution. Two examples of backpropagation learning chips support this claim: Shima's [14] synapse consumes 1250 K and Morie's [15] synapse consumes 525 K. These are much larger than the 4.9 K synapse described later in this paper.…”
Section: Learning Algorithms for Hardware Implementations
confidence: 97%
“…Some implementations have consumed very low power [12] or have supported low-power standby modes [10], [13]. On-chip learning has been demonstrated [14]-[19]. Some analog implementations have been very flexible [16], [20] and others have offered board-level alterability [21].…”
Section: Introduction
confidence: 99%
“…This is not just an approximation of gradient descent; much of the time it exhibits significantly different behavior, particularly when the learning rate is not small enough relative to the number of training patterns [13], and learning appears slower as more weights reach the limits of the clipping function. Other clipping methods have also been studied during training [14,15]: in one approach, on-chip training was first performed without quantization to obtain an initial estimate of the weights, and training was then completed on a quantized network. In another approach, networks were trained off-chip and the weights were then clipped for recall.…”
Section: The Effects of Quantization Error on Backpropagation
confidence: 99%
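The two strategies quoted above, finishing training on a quantized network after an unquantized warm start versus clipping off-chip-trained weights for recall only, can be sketched in a few lines. The snippet below is a minimal illustration under assumptions of my own (a uniform quantizer, a 6-bit weight range, and a single sigmoid unit trained with the delta rule); it is not the scheme of any cited chip. Full-precision gradient updates are followed by a projection onto the clipped, limited-resolution grid that a hardware synapse could actually store.

```python
import numpy as np

def quantize_and_clip(w, bits=6, w_max=1.0):
    """Project weights onto a uniform grid of 2**bits levels clipped to [-w_max, w_max].
    (Hypothetical quantizer; real chips may use other level spacings.)"""
    step = 2 * w_max / (2 ** bits - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

def train_step(w, x, t, lr):
    """One delta-rule step for a single sigmoid unit with the squared-error cost."""
    y = 1.0 / (1.0 + np.exp(-x @ w))                # forward pass
    grad = ((y - t) * y * (1 - y))[:, None] * x     # per-pattern dE/dw
    return w - lr * grad.sum(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))                        # toy inputs
t = (x[:, 0] > 0).astype(float)                     # toy targets
w = rng.normal(scale=0.1, size=4)

# Phase 1: warm start with full-precision weights ("training without quantization").
for _ in range(200):
    w = train_step(w, x, t, lr=0.1)

# Phase 2: continue training, but store weights at limited resolution after each
# update, as a synapse with a clipped, quantized weight would.
for _ in range(200):
    w = quantize_and_clip(train_step(w, x, t, lr=0.1))

# Alternative: train off-chip at full precision and quantize/clip once, for recall only.
w_recall = quantize_and_clip(w)
```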
“…In this section, we give two examples of implementing the algorithms for a feedforward network with the squared-error cost function. Topologically, this version of nonlinear backpropagation therefore maps onto hardware in exactly the same way as the original backpropagation algorithm; the only difference is how the backpropagated error term is calculated [9] (see also [10]).…”
Section: Test of Algorithm
confidence: 99%
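For reference, a minimal two-layer feedforward network trained with the squared-error cost is sketched below; the layer sizes, sigmoid activation, toy data, and variable names are assumptions made for illustration, not details of the cited algorithm. The point of the quote is visible in the structure: the forward and backward data flow, and hence the hardware mapping, is fixed by the topology, and a variant such as nonlinear backpropagation would change only the lines where the error terms (the deltas) are computed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(4, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
lr = 0.5

x = rng.normal(size=(64, 4))              # toy inputs
t = (x[:, :2].sum(axis=1, keepdims=True) > 0).astype(float)  # toy targets

for _ in range(500):
    # Forward pass through both layers.
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # Squared-error cost E = 0.5 * sum((y - t)**2); its gradient drives the deltas.
    # Standard backpropagation: delta = error * derivative of the activation.
    # A nonlinear-backpropagation variant would replace only these two lines.
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # The weight updates are the same outer products regardless of how the deltas
    # were obtained, which is why the hardware mapping of the two algorithms is identical.
    W2 -= lr * h.T @ delta_out / len(x)
    W1 -= lr * x.T @ delta_hid / len(x)
```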