1960
DOI: 10.21236/ad0241531
Adaptive Switching Circuits

Abstract: Adaptive or "learning" systems can automatically modify their own structures to optimize performance based on past experiences. The system designer "teaches" by showing the system examples of input signals or patterns and simultaneously what he would like the output to be for each input. The system in turn organizes itself to comply as well as possible with the wishes of the designer. An adaptive pattern classification machine (called "Adaline", for adaptive linear) has been devised to illustrate adaptive beh…
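The Adaline machine described in the abstract can be sketched in modern terms as a linear unit whose weights are trained with the Widrow-Hoff (LMS/delta) rule and whose classification decision thresholds the linear sum. This is a minimal illustrative sketch; the function names, learning rate, and bias-free formulation are assumptions, not details from the paper.

```python
import numpy as np

def train_adaline(X, y, eta=0.05, epochs=100):
    """Widrow-Hoff (LMS) training of a linear unit's weights (bias omitted)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, y):
            out = np.dot(w, x_i)          # linear output; no threshold during training
            w += eta * (t_i - out) * x_i  # delta rule: step proportional to the error
    return w

def classify(w, x):
    """Adaline's decision: threshold the linear sum to a bipolar class label."""
    return 1 if np.dot(w, x) >= 0 else -1
```

On a linearly separable set of bipolar patterns, the weights settle on the least-squares solution and the thresholded output reproduces the teacher's labels.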

Cited by 2,536 publications (1,302 citation statements); References 0 publications.
“…Whilst the SOM uses unsupervised learning, the mapping of the number line to a precise value is achieved using a single layer perceptron (SLP), trained using the delta learning rule (Widrow & Hoff, 1960). This mapping models the association of precise values to a foundation of numerical magnitudes, with such transcoding central to all of the models of numerical processing discussed.…”
Section: Simulating Small Number Detection
confidence: 99%
“…Second, to determine the weights corresponding to a given learning experiment we need a learning algorithm. Learning is modeled here with a refinement of the RW learning rule, first derived by Widrow and Hoff (1960) and introduced to animal learning theory by Blough (1975). In this algorithm weight W_i changes according to…”
Section: Model
confidence: 99%
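The Rescorla-Wagner (RW) rule mentioned above is the animal-learning analogue of the Widrow-Hoff update: on each trial, every present cue's associative strength moves in proportion to a shared prediction error. The sketch below is a generic single-trial RW update; the parameter names (`alpha`, `beta`, `lam`) follow convention and are not taken from this source.

```python
def rescorla_wagner_step(V, present, lam, alpha=0.3, beta=1.0):
    """One trial of the RW rule.

    V       -- dict mapping cue -> associative strength (mutated in place)
    present -- cues active on this trial
    lam     -- asymptotic strength supported by the outcome (lambda)
    """
    total = sum(V[c] for c in present)      # combined prediction from all active cues
    error = lam - total                     # shared prediction error for the trial
    for c in present:
        V[c] += alpha * beta * error        # each present cue absorbs part of the error
    return V
```

With a single cue repeatedly paired with the outcome, the strength approaches `lam` along the familiar negatively accelerated learning curve.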
“…They are the weights w_ij, the noise-control parameters a_j of the CRBM, and the weights w_k in the output SLP layer. The CRBM parameters are optimised by minimising contrastive divergence (MCD) [11] while the SLP is trained using the delta rule [16].…”
Section: Architecture
confidence: 99%
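The contrastive-divergence training mentioned in the last statement can be sketched for the simplest case, a plain binary RBM with CD-1; the cited work uses a *continuous* RBM with noise-control parameters a_j, which this sketch deliberately omits, and biases are left out for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.05):
    """One CD-1 weight update for a binary RBM from visible vector v0."""
    ph0 = sigmoid(v0 @ W)                      # P(h=1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample hidden states
    v1 = sigmoid(h0 @ W.T)                     # mean-field reconstruction of visibles
    ph1 = sigmoid(v1 @ W)                      # hidden probabilities for reconstruction
    # Positive minus negative phase: CD-1 approximation to the likelihood gradient
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    return W
```

Repeating this update over the training data pushes the model's reconstructions toward the data distribution, which is what "minimising contrastive divergence" refers to.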