IJCNN-91-Seattle International Joint Conference on Neural Networks 1991
DOI: 10.1109/ijcnn.1991.155621
Learning a synaptic learning rule

Cited by 314 publications (299 citation statements)
References 0 publications
“…Using gradient descent, genetic algorithms and simulated annealing, we had already found learning rules for classical conditioning problems, classification problems, and boolean problems (see [7]). Moreover, the experimental results described in section 4 qualitatively agree with learning theory applied to parametric learning rules.…”
Section: Results (mentioning)
confidence: 98%
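The quoted passage describes optimizing the coefficients of a parametric learning rule with gradient descent, genetic algorithms, or simulated annealing. As a rough illustration only (the polynomial form of the rule, the choice of local variables, and every name such as `delta_w` or `anneal` are assumptions for this sketch, not details from the paper):

```python
import numpy as np

def delta_w(theta, pre, post, err):
    """One candidate parametric learning rule: a small polynomial in local
    variables (presynaptic activity, postsynaptic activity, a local error
    signal), whose coefficients `theta` are what gets optimized."""
    return (theta[0] + theta[1] * pre + theta[2] * post
            + theta[3] * pre * post + theta[4] * pre * err)

def anneal(fitness, dim=5, steps=2000, t0=1.0, seed=0):
    """Plain simulated annealing over the rule coefficients; gradient descent
    or a genetic algorithm could be swapped in, as the passage notes."""
    rng = np.random.default_rng(seed)
    cur = rng.normal(size=dim)
    cur_f = fitness(cur)
    best, best_f = cur.copy(), cur_f
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-3      # linear cooling schedule
        cand = cur + rng.normal(scale=0.1, size=dim)
        f = fitness(cand)
        if f > cur_f or rng.random() < np.exp((f - cur_f) / temp):
            cur, cur_f = cand, f                   # accept better or lucky worse move
            if f > best_f:
                best, best_f = cand.copy(), f
    return best, best_f
```

Here `fitness(theta)` would measure how well networks trained with `delta_w(theta, ...)` perform on the target problems (conditioning, classification, or boolean tasks).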
“…could be defined by equation (2) and | could be a specific vector of rule parameters. If z represents the AND task, then g(z'J(.…”
Section: Extension to Parametric Learning Rule (mentioning)
confidence: 99%
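Read alongside the previous excerpt, this fragment appears to describe evaluating a specific vector of rule parameters on a task such as AND, i.e. a task-level fitness g(task, parameters). A minimal sketch of such an evaluation (the sigmoid unit, learning rate, epoch count, and all names here are illustrative assumptions, not the cited paper's definitions):

```python
import numpy as np

AND_TASK = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def g_and(theta, epochs=50, lr=0.1, seed=0):
    """Fitness of a rule-parameter vector `theta` on the AND task: train a
    single sigmoid unit with the parametric rule, return final accuracy."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=3)              # two inputs plus bias
    for _ in range(epochs):
        for x, t in AND_TASK:
            xb = np.array([*x, 1.0])
            y = 1.0 / (1.0 + np.exp(-w @ xb))      # postsynaptic activity
            err = t - y                            # local error signal
            for i in range(3):                     # apply the rule per synapse
                w[i] += lr * (theta[0] + theta[1] * xb[i] + theta[2] * y
                              + theta[3] * xb[i] * y + theta[4] * xb[i] * err)
    correct = 0
    for x, t in AND_TASK:
        y = 1.0 / (1.0 + np.exp(-w @ np.array([*x, 1.0])))
        correct += int((y > 0.5) == bool(t))
    return correct / len(AND_TASK)
```

A vector like `theta = [0, 0, 0, 0, 1]` reduces the rule to delta-rule-style updates, one reasonable point in this parameter space.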
“…However, constraints set on learning rules could prevent some from being evolved, such as those which include third- or fourth-order terms. Similar experiments on the evolution of learning rules were also carried out by others [265], [266], [267], [269], [270]. Fontanari and Meir [267] used Chalmers' approach to evolve learning rules for binary perceptrons.…”
Section: B. The Evolution of Learning Rules (mentioning)
confidence: 85%
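The evolutionary experiments described here amount to running a genetic algorithm over vectors of rule coefficients. A toy sketch under that reading (population size, operators, and the fitness argument are all assumptions, not the settings used in [265]–[270]):

```python
import numpy as np

def evolve_rule(fitness, dim=5, pop_size=40, generations=100, seed=0):
    """Toy genetic algorithm over learning-rule coefficient vectors:
    truncation selection, uniform crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(-scores)[: pop_size // 2]]   # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5                       # uniform crossover
            children.append(np.where(mask, a, b) + rng.normal(scale=0.05, size=dim))
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))], float(scores.max())
```

With a task-level fitness such as the `g_and` sketch above, `evolve_rule(g_and)` would search for rule coefficients that let a unit learn AND, in the spirit of the binary-perceptron experiments mentioned.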
“…Bengio et al.'s approach [265], [266] is slightly different from Chalmers' in the sense that gradient descent algorithms and simulated annealing, rather than EA's, were used to find near-optimal θ's. In their experiments, four local variables and one zeroth-order, three first-order, and three second-order terms in (4) were used.…”
Section: B. The Evolution of Learning Rules (mentioning)
confidence: 99%
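The term structure mentioned here can be written out explicitly. In the sketch below, the choice of which four local variables appear and which first- and second-order products are included is an assumption, since the quote gives only the counts (one zeroth-order, three first-order, three second-order terms):

```python
import numpy as np

def delta_w_second_order(theta, pre, post, w, teach, eta=0.1):
    """Weight change built from four local variables (presynaptic activity,
    postsynaptic activity, current weight, teaching/modulatory signal),
    combining one zeroth-order, three first-order, and three second-order
    terms, matching the term counts quoted for equation (4)."""
    terms = np.array([
        1.0,             # one zeroth-order term
        pre, post, w,    # three first-order terms
        pre * post,      # second-order: Hebbian product
        pre * teach,     # second-order: presynaptic x teaching signal
        post * teach,    # second-order: postsynaptic x teaching signal
    ])
    return eta * float(np.asarray(theta) @ terms)
```

Seven coefficients in `theta` then fully specify the rule, which is what makes gradient descent and simulated annealing over such a small parameter vector practical.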