2008
DOI: 10.1007/s00422-008-0275-4
Optimal learning rules for familiarity detection

Abstract: It has been suggested that the mammalian memory system has both familiarity and recollection components. Recently, a high-capacity network to store familiarity has been proposed. Here we derive analytically the optimal learning rule for such a familiarity memory using a signal-to-noise ratio analysis. We find that in the limit of large networks the covariance rule, known to be the optimal local, linear learning rule for pattern association, is also the optimal learning rule for familiarity discrimination. The c…
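To make the setup concrete, here is a minimal numerical sketch, assuming binary patterns with mean activity p and the covariance rule described in the abstract; the sizes, the activity level, and the name `familiarity_signal` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a covariance-rule familiarity memory (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, M, p = 500, 200, 0.5          # units, stored patterns, mean activity (assumed values)

# Binary patterns x^mu in {0,1}^N with Pr[x_i = 1] = p.
patterns = (rng.random((M, N)) < p).astype(float)

# Covariance rule: W_ij = sum_mu (x_i^mu - p)(x_j^mu - p), no self-connections.
centered = patterns - p
W = centered.T @ centered
np.fill_diagonal(W, 0.0)

def familiarity_signal(x, W, activity=p):
    """Quadratic readout h(x) = (x - p)^T W (x - p); a large value suggests 'familiar'."""
    v = x - activity
    return float(v @ W @ v)
```

Stored patterns should give a systematically larger readout than novel ones; the paper's signal-to-noise ratio analysis quantifies how well this separation holds up in the limit of large networks.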

Cited by 8 publications (16 citation statements); references 18 publications.
“…Although this is not a very elegant solution, as it requires a multiplication operation and a duplication of the synaptic weights, it does show that the network energy is not a purely theoretical quantity. (For other network implementations that read out the network energy, see, e.g., Bogacz, Brown, & Giraud-Carrier, 2001; Greve et al., 2009.) The time derivative of the energy can be easily calculated in neural circuits once the energy has been extracted, for instance, using short-term synaptic depression (Puccini, Sanchez-Vives, & Compte, 2007).…”
Section: Network Setup
confidence: 97%
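As a loose numerical illustration of reading out the network energy and its time derivative (the statement above refers to circuit-level mechanisms such as weight duplication and short-term synaptic depression, which are not modeled here), the sketch below runs Hopfield-style ±1 dynamics with random symmetric weights and tracks E(t) = -0.5 s(t)^T W s(t) along with its discrete-time difference. All parameters are assumptions.

```python
# Sketch: track the network energy and its discrete time difference during
# Hopfield-style dynamics (illustrative; not the cited circuit implementation).
import numpy as np

rng = np.random.default_rng(1)
N = 200
s = rng.choice([-1.0, 1.0], size=N)   # +-1 state vector (assumed coding)
W = rng.standard_normal((N, N))
W = (W + W.T) / 2                     # symmetric random weights
np.fill_diagonal(W, 0.0)

def energy(s, W):
    return -0.5 * s @ W @ s

E_prev = energy(s, W)
for t in range(10):                   # synchronous updates, for illustration
    s = np.sign(W @ s)
    s[s == 0] = 1.0
    E = energy(s, W)
    dE = E - E_prev                   # discrete analogue of the energy's time derivative
    print(f"step {t}: E = {E:.1f}, dE = {dE:.1f}")
    E_prev = E
```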
“…It can be shown that of all local additive learning rules, rule 2.1 is optimal, as it provides the highest capacity in the limit of large N, M (Greve et al., 2009). During the subsequent test phase, the network's performance is evaluated.…”
Section: Network Setup
confidence: 99%
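A minimal sketch of such a test-phase evaluation, assuming the covariance rule from the abstract: the familiarity readout is computed for the stored patterns and for an equal number of novel ones, and an empirical signal-to-noise ratio and discrimination accuracy are reported. Names, sizes, and the midpoint threshold are illustrative choices, not taken from the cited work.

```python
# Sketch: test-phase discrimination of stored vs. novel patterns with the
# covariance rule, plus an empirical signal-to-noise ratio (illustrative).
import numpy as np

rng = np.random.default_rng(2)
N, M, p = 500, 200, 0.5
stored = (rng.random((M, N)) < p).astype(float)
novel  = (rng.random((M, N)) < p).astype(float)

centered = stored - p
W = centered.T @ centered             # covariance learning rule
np.fill_diagonal(W, 0.0)

def readout(X):
    V = X - p
    return np.einsum("ij,jk,ik->i", V, W, V)   # h(x) = (x-p)^T W (x-p) per pattern

h_old, h_new = readout(stored), readout(novel)
snr = (h_old.mean() - h_new.mean()) / np.sqrt(0.5 * (h_old.var() + h_new.var()))
theta = 0.5 * (h_old.mean() + h_new.mean())     # simple midpoint threshold (assumed)
accuracy = 0.5 * ((h_old > theta).mean() + (h_new <= theta).mean())
print(f"empirical SNR = {snr:.2f}, discrimination accuracy = {accuracy:.3f}")
```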