Proceedings of the IEEE Custom Integrated Circuits Conference
DOI: 10.1109/cicc.1992.591335

A Cascadable Neural Network Chip Set With On-chip Learning Using Noise And Gain Annealing

Abstract: We describe a VLSI neural network with on-chip learning that settles through 'thermal annealing'. Full cascadability and connectivity are ensured by a 32-neuron, 1496-synapse chip and a 1024-synapse chip. The recurrent analog network uses the Boltzmann Machine learning rule to update the weight at each synapse. Two annealing modes, noise and gain, representing the control parameter 'temperature', help settle the network to a global energy minimum. Extrapolated measurements indicate that the network can perform up …
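
The abstract's two key ideas — the Boltzmann Machine weight update and settling under an annealed 'temperature' — can be sketched in software. This is a minimal illustrative sketch, not the chip's analog implementation: the function names, the geometric cooling schedule, and the binary {0, 1} unit states are assumptions for illustration.

```python
import math
import random

def anneal_settle(weights, state, t_start=10.0, t_end=0.5, steps=200, seed=0):
    """Settle a small stochastic network by annealing 'temperature' T:
    each visited unit turns on with probability sigmoid(net_input / T),
    and T is lowered over time (software analogue of gain annealing,
    where lowering T is equivalent to raising the neuron gain)."""
    rng = random.Random(seed)
    n = len(state)
    for k in range(steps):
        # Geometric cooling from t_start down to t_end (assumed schedule).
        t = t_start * (t_end / t_start) ** (k / (steps - 1))
        i = rng.randrange(n)
        net = sum(weights[i][j] * state[j] for j in range(n) if j != i)
        p_on = 1.0 / (1.0 + math.exp(-net / t))
        state[i] = 1 if rng.random() < p_on else 0
    return state

def boltzmann_update(corr_clamped, corr_free, eta=0.1):
    """Boltzmann Machine learning rule at each synapse:
    dw_ij = eta * (<s_i s_j>_clamped - <s_i s_j>_free),
    given the co-activation statistics from the clamped and free phases."""
    return [[eta * (c - f) for c, f in zip(row_c, row_f)]
            for row_c, row_f in zip(corr_clamped, corr_free)]
```

At high T the update probabilities are near 0.5 and the state wanders freely; as T falls the network is progressively trapped in low-energy configurations, which is how annealing helps avoid poor local minima.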

Cited by 10 publications (4 citation statements)
References 3 publications
“…A special case of stochastic neural networks, Boltzmann machines, have also been popular in neuromorphic systems. The general Boltzmann machine was utilized in neuromorphic systems primarily in the early 1990s [12], [1193]- [1199], but it has seen occasional implementations in more recent publications [1200]- [1203]. A more common use of the Boltzmann model is the restricted Boltzmann machine, because the training time is significantly reduced when compared with a general Boltzmann machine.…”
Section: Network Models
confidence: 99%
“…These approaches include the least-mean-squares algorithm [750], [787], [1025], [1026], weight perturbation [19], [625], [655], [669], [682], [698], [699], [708], [710], [712], [713], [715], [736], [834], [835], [841], [845]- [847], [856], [1078]- [1080], [1098], [1099], [1148], [1304], training specifically for convolutional neural networks [1305], [1306] and others [169], [220], [465], [714], [804], [864], [865], [1029], [1049], [1307]- [1320]. Other on-chip supervised learning mechanisms are built for particular model types, such as Boltzmann machines, restricted Boltzmann machines, or deep belief networks [12], [627], [1135], [1193], …”
Section: A Supervised Learning
confidence: 99%
“…6,97,98 The board can contain up to six neural network chips. The chips support mean-field annealing and Boltzmann learning for on-chip weight changes.…”
Section: Neural Network Implementations In Analog Hardware
confidence: 99%