1993
DOI: 10.1109/72.217187

A CMOS analog adaptive BAM with on-chip learning and weight refreshing

Abstract: The transconductance-mode (T-mode) approach to implementing analog continuous-time neural network hardware is extended to include on-chip Hebbian learning and on-chip analog weight storage. The demonstration vehicle is a 5+5-neuron bidirectional associative memory (BAM) prototype fabricated in a standard 2-µm double-metal double-polysilicon CMOS process. Mismatches and nonidealities in learning neural hardware are not expected to be critical if on-chip learning is available, because they …
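
The core mechanism the abstract names, Hebbian learning in a bidirectional associative memory, can be summarized in a few lines of software. The sketch below is a toy model of BAM storage and recall under bipolar (±1) coding, sized to match the 5+5-neuron prototype; the pattern pairs are made up for illustration, and nothing here models the analog T-mode circuits themselves.

```python
# Toy BAM model (illustrative only; not the chip's circuitry).
# Hebbian learning stores pattern pairs in a weight matrix W, and recall
# iterates x -> y -> x through W and its transpose until a stable pair.
import numpy as np

def hebbian_weights(pairs):
    """Hebbian (outer-product) learning: W = sum over pairs of y x^T."""
    n_x, n_y = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n_y, n_x))
    for x, y in pairs:
        W += np.outer(y, x)
    return W

def recall(W, x, steps=10):
    """Bidirectional recall with hard-threshold (sign) neurons."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sign(W @ x)
        x = sign(W.T @ y)
    return x, y

# Two bipolar 5+5 pattern pairs, matching the 5+5-neuron prototype size.
pairs = [
    (np.array([1, -1, 1, -1, 1]), np.array([1, 1, -1, -1, 1])),
    (np.array([-1, -1, 1, 1, -1]), np.array([-1, 1, 1, -1, -1])),
]
W = hebbian_weights(pairs)
noisy = np.array([1, -1, 1, -1, -1])   # first x-pattern with one flipped bit
x, y = recall(W, noisy)
print(x, y)                            # converges back to the stored pair
```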

Cited by 52 publications (21 citation statements)
References 38 publications

“…Categorized by storage types, there are five kinds of synapse circuits: capacitor only [1], [7]-[11], capacitor with refreshment [12]-[14], capacitor with EEPROM [4], digital [15], [16], and mixed D/A [17] circuits.…”
Section: B. Synapse Circuits
Mentioning confidence: 99%
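
Of these categories, "capacitor with refreshment" is the one the paper above demonstrates. A minimal sketch of the idea follows, with leakage rate, refresh period, and level count chosen only for illustration: an analog weight held on a capacitor leaks away, but periodically snapping it back to the nearest level of a coarse quantizer preserves it indefinitely, provided the droop between refreshes stays under half an LSB.

```python
# Sketch of capacitor storage with periodic refresh (assumed toy values).
LEVELS = 16            # discrete refresh levels across the 0..1 weight range
LEAK_PER_STEP = 0.002  # fractional droop of the stored voltage per time step
REFRESH_EVERY = 10     # refresh period, in time steps

def refresh(w: float) -> float:
    """Snap the leaked weight back to the nearest quantization level."""
    lsb = 1.0 / (LEVELS - 1)
    return round(w / lsb) * lsb

w = refresh(0.60)                   # weight as initially written (on a level)
for t in range(1, 101):
    w -= w * LEAK_PER_STEP          # capacitor droop between refreshes
    if t % REFRESH_EVERY == 0:
        w = refresh(w)              # periodic restore to the nearest level

print(f"weight after 100 steps: {w:.4f}")   # stays pinned near 0.6
```
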
“…Some implementations use digital memories for more permanent weight storage [85]. The works in [64]-[71], [73] are some other analog implementations previously discussed in Section 2.1.3 above. Although there are many advantages of implementing analog neural networks as discussed above, the disadvantage is that the analog chips are susceptible to noise and process parameter variations, and hence need a very careful design.…”
Section: Analog Neural Hardware Implementations
Mentioning confidence: 99%

“…A digitally controlled synapse circuit and an adaptation rule circuit with an R-2R ladder network, a simple control logic circuit, and an UP/DOWN counter are implemented to realize a modified technique for the backpropagation algorithm. Linares-Barranco et al also show an on-chip trainable implementation of an analog transconductance-model neural network [65]. Field Programmable Neural Arrays (FPNAs), an analog neural equivalent of FPGAs, are a mesh of analog neural models interconnected via a configurable interconnect network [66][67][68][69][70].…”
Section: Design
Mentioning confidence: 99%
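
A rough software model can make the counter-plus-ladder synapse described in this quote concrete. The bit width, reference voltage, and sign-only update rule below are assumptions for illustration, not parameters from the cited design: an UP/DOWN counter holds the weight code, an ideal R-2R ladder DAC converts it to an analog weight, and the adaptation rule is reduced to the sign of the desired weight change.

```python
# Sketch of a digitally controlled synapse (assumed parameters, not from [65]).
N_BITS = 8
V_REF = 1.0

def r2r_dac(code: int, n_bits: int = N_BITS, v_ref: float = V_REF) -> float:
    """Ideal R-2R ladder DAC: output = v_ref * code / 2^n_bits."""
    return v_ref * code / (1 << n_bits)

def step_counter(code: int, delta_w: float) -> int:
    """UP/DOWN counter: step one LSB per update, using only the sign
    of the desired weight change (the usual hardware simplification)."""
    if delta_w > 0:
        code += 1
    elif delta_w < 0:
        code -= 1
    return max(0, min((1 << N_BITS) - 1, code))   # saturate, don't wrap

code = 128                                        # mid-scale starting weight
for delta in [0.3, 0.1, -0.2, 0.05, -0.4]:        # toy sequence of updates
    code = step_counter(code, delta)
    print(f"code={code:3d}  weight={r2r_dac(code):.4f} V")
```
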
“…Usually a strong limitation when scaling up analog neural hardware is how systematic offsets accumulate. A common circuit technique for analog neural VLSI is the use of transconductors [22], [50]. Connecting many of them in parallel results in addition of their systematic offset components.…”
Section: Further Enhancements
Mentioning confidence: 99%
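
A small numeric sketch (with assumed offset values, not figures from the cited work) shows why the systematic component is the limiting term: paralleling N transconductors adds their same-sign systematic offsets linearly in N, while zero-mean random mismatch grows only like sqrt(N).

```python
# Offset accumulation when paralleling N transconductors (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
SYSTEMATIC_UA = 0.05    # same-sign offset per transconductor, in microamps
RANDOM_SIGMA_UA = 0.5   # std. dev. of random per-device mismatch, in microamps

for n in (10, 100, 1000):
    systematic_part = SYSTEMATIC_UA * n                       # grows as N
    random_part = rng.normal(0.0, RANDOM_SIGMA_UA, size=n).sum()  # ~ sqrt(N)
    print(f"N={n:4d}: systematic={systematic_part:7.2f} uA, "
          f"random={random_part:6.2f} uA")
```
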
“…Many times some of these requirements can be relaxed, the topology modified, or the operations simplified, with no significant deterioration of global operation of the neural system but with a considerable boost in the hardware performance. Modifying neural algorithms to make them more VLSI-friendly and produce more efficient hardware should be a common practice among neural hardware engineers of the second type [19]- [22]. After selecting an appropriate neural algorithm the next step consists of studying how far the algorithm can be simplified without performance degradation.…”
Section: Introduction
Mentioning confidence: 99%
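
As one concrete, generic instance of the kind of simplification this passage describes (chosen by the editor, not taken from [19]-[22]): a Hebbian-style update w += lr*x*y needs a four-quadrant analog multiplier, while a sign-sign variant needs only comparators and a fixed up/down step, at the cost of a coarser update.

```python
# Full multiply vs. a VLSI-friendly sign-sign update (generic illustration).
def hebbian_update(w: float, x: float, y: float, lr: float = 0.01) -> float:
    """Full update: requires a four-quadrant analog multiplier."""
    return w + lr * x * y

def sign_sign_update(w: float, x: float, y: float, step: float = 0.01) -> float:
    """Simplified update: only the signs of x and y are used, so the
    multiplier collapses to an XOR-like comparator and a fixed step."""
    sgn = lambda v: (v > 0) - (v < 0)
    return w + step * sgn(x) * sgn(y)

w_full, w_simple = 0.0, 0.0
pairs = [(0.8, 0.6), (-0.3, 0.9), (0.5, -0.7), (-0.2, -0.4)]
for x, y in pairs:
    w_full = hebbian_update(w_full, x, y)
    w_simple = sign_sign_update(w_simple, x, y)
print(f"full multiply: {w_full:+.4f}   sign-sign: {w_simple:+.4f}")
```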