2000
DOI: 10.1088/0305-4470/33/41/302

On-line learning in the Ising perceptron

Abstract: On-line learning of both binary and continuous rules in an Ising space is studied. Learning is achieved by using an artificial parameter, a weight vector J, which is constrained to the surface of a hypersphere (spherical constraint). In the case of a binary rule the generalization error decays to zero super-exponentially as exp(−Cα²), where α is the number of examples divided by N, the size of the input vector, and C > 0. Much faster learning is obtained in the case of continuous activation functions where t…
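For orientation, the setup the abstract describes can be sketched in a few lines: a continuous auxiliary vector J is kept on the hypersphere |J|² = N, while the Ising student that is actually evaluated is its clipped version sign(J). The Python sketch below assumes a mistake-driven Hebbian update; the learning rate, example count, and the specific update rule are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 501                       # input dimension (odd, so ±1 fields are never zero)
B = rng.choice([-1, 1], N)    # binary (Ising) teacher rule

# Continuous auxiliary vector with the spherical constraint |J|^2 = N
J = rng.standard_normal(N)
J *= np.sqrt(N) / np.linalg.norm(J)

eta = 1.0                     # illustrative learning rate
P = 20 * N                    # number of examples; alpha = P / N
for _ in range(P):
    xi = rng.choice([-1, 1], N)           # random binary input pattern
    sigma_T = np.sign(B @ xi)             # teacher's label
    sigma_S = np.sign(np.sign(J) @ xi)    # student answers with its clipped weights
    if sigma_S != sigma_T:                # mistake-driven Hebbian update
        J += eta * sigma_T * xi / np.sqrt(N)
        J *= np.sqrt(N) / np.linalg.norm(J)   # re-project onto the hypersphere

# Generalization error of the Ising student sign(J) against the teacher B
R_I = (np.sign(J) @ B) / N
print(f"alpha = {P / N:.0f}, eps_g ~ {np.arccos(R_I) / np.pi:.4f}")
```

Under this kind of rule the overlap of the clipped student with the teacher grows with α, which is the quantity whose super-exponential convergence the abstract reports.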

Cited by 6 publications (10 citation statements; citing years 2001–2020)
References: 20 publications
“…This assumption is violated in the case that the updating of the continuous vector itself is made according to the clipped one; see [6,11]. The results are:…”
Section: The Order Parameters
confidence: 99%
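For context, the order parameters in this line of work are overlaps between teacher and student. A minimal sketch of the usual definitions, assuming a binary teacher B ∈ {−1,+1}^N and a continuous student J with |J|² = N (the notation is assumed here, not taken from the citing paper):

```latex
% Notation assumed for illustration: teacher B in {-1,+1}^N, continuous student J, |J|^2 = N
\begin{align*}
  R          &= \frac{1}{N}\,\mathbf{J}\cdot\mathbf{B}
             && \text{(overlap of the continuous student)} \\
  R_I        &= \frac{1}{N}\sum_{i=1}^{N} \operatorname{sign}(J_i)\, B_i
             && \text{(overlap of the clipped, Ising student)} \\
  \epsilon_g &= \frac{1}{\pi}\arccos(R_I)
             && \text{(generalization error of the Ising student)}
\end{align*}
```

The assumption the quote refers to concerns expressing the clipped overlap through the statistics of the continuous vector; it breaks down once J itself is updated according to its clipped version.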
“…Algorithms and mathematical models for perceptron-based supervised learning can encompass a ‘teacher’ element that provides data sets and determines responses to those data, and a ‘student’ element, whose learning is directed by the teacher [15]. The biological student–teacher (BST) network consists of sets of genes within teacher and student cells that interact via promoting or repressing outputs.…”
Section: Introduction
confidence: 99%
“…The SBPI learning rules and hidden-state requirements are the same as those of the well-known clipped perceptron algorithm (CP; see e.g. [17]); the only difference is an additional, purely metaplastic rule, which is applied only when the answer given by the device is correct, but such that a single variable flip would result in a classification error.…”
Section: Introduction
confidence: 99%
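As a rough illustration of the rule described in this excerpt, the sketch below implements a clipped-perceptron update on hidden integer states h (visible weight sign(h)), plus a secondary update that fires only when the answer is correct with minimal margin, i.e. when a single weight flip would cause an error. The threshold theta, the probability p_s, and the choice of which synapses to consolidate are assumptions modeled on the quoted description, not the SBPI specification.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 501                          # odd, so ±1 fields are never zero
B = rng.choice([-1, 1], N)       # teacher
h = rng.choice([-1, 1], N)       # hidden integer states; visible weight is sign(h)
theta, p_s = 2, 0.5              # secondary-rule threshold and probability (illustrative)

for _ in range(50 * N):
    xi = rng.choice([-1, 1], N)
    sigma_T = np.sign(B @ xi)    # correct answer
    w = np.sign(h)               # visible ±1 weights
    field = int(w @ xi)
    if np.sign(field) != sigma_T:
        # CP rule: on an error, push every hidden state toward the correct answer
        h += 2 * sigma_T * xi
    elif sigma_T * field <= theta and rng.random() < p_s:
        # Secondary, metaplastic rule: answer is correct but barely (a single
        # weight flip could cause an error); consolidate the agreeing synapses
        agree = (w * xi == sigma_T)
        h[agree] += 2 * sigma_T * xi[agree]

print("final overlap with teacher:", (np.sign(h) @ B) / N)
```

Because the secondary rule only moves hidden states further from zero on synapses that already agree with the input, it never changes a visible weight, which is what makes it purely metaplastic in the sense of the quote.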
“…This problem is in fact easier to address than the classification problem, and optimal algorithms can be found which solve it in the binary-synapses case as well (see e.g. [8,17,19]); however, such algorithms are not suitable candidates for biological models of online learning, being either too complex or requiring all intermediate operations to be performed by an auxiliary device with continuous synaptic weights. The resulting set of differential equations that we obtained gives some insight into the learning dynamics and into the reason for SBPI's effectiveness, and allows a further simplification of the SBPI algorithm, yielding an even more attractive model of a neuronal unit, both from the point of view of biological feasibility and of simplicity of hardware manufacturing design.…”
Section: Introduction
confidence: 99%