2001
DOI: 10.1007/3-540-44597-8_26

Biological Grounding of Recruitment Learning and Vicinal Algorithms in Long-Term Potentiation

Abstract: Biological networks are capable of gradual learning based on observing a large number of exemplars over time as well as of rapidly memorizing specific events as a result of a single exposure. The focus of research in neural networks has been on gradual learning, and the modeling of one-shot memorization has received relatively little attention. Nevertheless, the development of biologically plausible computational models of rapid memorization is of considerable value, since such models would enhance our understanding…

Cited by 18 publications (18 citation statements); references 34 publications.
“…It performs this reasoning with sparse, structured coding, like the best connectionist models to date (van der Velde and de Kamps, 2006), and uses simple computational units that could be built from neurons or collections of neurons. Much of this has already been discussed relative to SHRUTI, and our model is similar enough to SHRUTI that much of that argument holds (Shastri, 2001b). There seems to be no way to eliminate some common attentional mechanism like the central binder of § 2.2 (Simons and Chabris, 1995).…”
Section: Biological Plausibility
confidence: 55%
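SHRUTI-style models of the kind this statement discusses express variable bindings through temporal synchrony: a role node and a filler node count as bound when they fire in the same phase of a repeating cycle. The following is a minimal sketch of that idea only, not code from the cited work; the node names, the integer phase slots, and NUM_PHASES are all illustrative assumptions.

```python
# Minimal sketch of temporal-synchrony binding (SHRUTI-style).
# A phase within a repeating cycle is modeled as an integer slot;
# a role and a filler are bound iff they fire in the same slot.
# All names and values here are illustrative, not from the paper.

NUM_PHASES = 5  # distinct bindings representable per cycle

def bind(schedule, role, filler):
    """Assign role and filler to the same free phase slot."""
    used = set(schedule.values())
    free = next(p for p in range(NUM_PHASES) if p not in used)
    schedule[role] = free
    schedule[filler] = free

def bound_pairs(schedule):
    """Recover bindings: nodes firing in the same phase are bound."""
    by_phase = {}
    for node, phase in schedule.items():
        by_phase.setdefault(phase, []).append(node)
    return [tuple(nodes) for nodes in by_phase.values() if len(nodes) > 1]

schedule = {}
bind(schedule, "giver-role", "John")
bind(schedule, "recipient-role", "Mary")
print(bound_pairs(schedule))  # [('giver-role', 'John'), ('recipient-role', 'Mary')]
```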
“…All of these incremental changes can be implemented in the SHRUTI model via relatively simple means involving the recruitment of nodes, by strengthening latent connections as a response to frequent simultaneous activations (Shastri, 2001;Shastri and Wendelken, 2003;Wendelken and Shastri, 2003).…”
Section: Mother(x) → Mother(x,y)
confidence: 99%
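The incremental mechanism this statement describes — committing latent connections once their endpoints have been simultaneously active often enough, so that a free node is thereby recruited — can be sketched directly. This is one illustrative reading of the passage, not the SHRUTI implementation; the class, the coactivation counters, and the threshold below are assumptions.

```python
# Sketch of recruitment learning: a free (latent) node is recruited
# for a new concept when its latent connections from the currently
# active nodes have been strengthened by repeated coactivation.
# Thresholds and data structures are illustrative assumptions.

COACTIVATION_THRESHOLD = 3  # coactivations needed to commit a link

class RecruitmentNet:
    def __init__(self, num_nodes):
        self.coactivations = {}   # (pre, post) -> count of synchronous firings
        self.committed = set()    # links strengthened past threshold
        self.free = set(range(num_nodes))  # nodes not yet recruited

    def observe(self, active_pre, post):
        """Record one episode of synchronous activity into node `post`."""
        for pre in active_pre:
            key = (pre, post)
            self.coactivations[key] = self.coactivations.get(key, 0) + 1
            if self.coactivations[key] >= COACTIVATION_THRESHOLD:
                self.committed.add(key)

    def recruit(self, active_pre):
        """Pick a free node and strengthen its links from the active set."""
        post = self.free.pop()
        for _ in range(COACTIVATION_THRESHOLD):
            self.observe(active_pre, post)
        return post

net = RecruitmentNet(num_nodes=100)
new_node = net.recruit(active_pre=[1, 2])   # one-shot recruitment
assert (1, new_node) in net.committed and (2, new_node) in net.committed
```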
“…Convergent presynaptic activity at s_i's can lead to associative LTP of naive s_i's and increase their weights by w_ltp if the following conditions hold: (i) the total (convergent) activity arriving at the s_i's exceeds a potentiation threshold θ_p, (ii) this activity is synchronous, i.e., arrives with a maximum lead/lag of ω, (iii) such synchronous activity repeats at least ρ times, and (iv) the interval between two successive arrivals of convergent activity is at most τ_iai. It has been shown [17] that recruitment learning algorithms [6,15] proposed for one-shot learning in connectionist networks can be firmly grounded in LTP.…”
Section: Long-term Potentiation
confidence: 99%
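Read operationally, conditions (i)–(iv) in the statement above define a predicate over the spike volleys arriving at a group of naive synapses. A minimal sketch, with placeholder values standing in for θ_p, ω, ρ, and τ_iai (none of these values come from the cited paper):

```python
# Sketch of the associative-LTP conditions (i)-(iv) quoted above.
# Parameter values are placeholders for theta_p (activity threshold),
# omega (synchrony window), rho (required repetitions), and the
# maximum inter-arrival interval; none come from the cited paper.

THETA_P = 4      # (i) minimum number of convergent presynaptic spikes
OMEGA = 2.0      # (ii) max lead/lag (ms) for arrivals to count as synchronous
RHO = 3          # (iii) minimum number of synchronous volleys
MAX_IAI = 50.0   # (iv) max interval (ms) between successive volleys

def should_potentiate(volleys):
    """volleys: list of (volley_time, spike_arrival_times), one entry per
    episode of convergent activity. Returns True iff (i)-(iv) all hold."""
    qualifying = []
    for t, arrivals in volleys:
        if len(arrivals) < THETA_P:                    # (i) enough convergent activity
            continue
        if max(arrivals) - min(arrivals) > 2 * OMEGA:  # (ii) within the lead/lag window
            continue
        qualifying.append(t)
    if len(qualifying) < RHO:                          # (iii) repeats at least rho times
        return False
    gaps = [b - a for a, b in zip(qualifying, qualifying[1:])]
    return all(g <= MAX_IAI for g in gaps)             # (iv) volleys close enough in time

# Example: three synchronous five-spike volleys, 40 ms apart -> potentiate.
volleys = [(t, [t, t + 0.5, t + 1.0, t + 1.5, t + 2.0]) for t in (0.0, 40.0, 80.0)]
print(should_potentiate(volleys))  # True
```

The example drives three five-spike volleys, 40 ms apart, through the check; each volley clears the activity and synchrony tests, and the inter-volley gaps stay under the assumed maximum, so all four conditions are met.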