1998
DOI: 10.1080/095400998116521

Recovery of Unrehearsed Items in Connectionist Models

Abstract: When gradient-descent models with hidden units are retrained on a portion of a previously learned set of items, performance on both the relearned and unrelearned items improves. Previous explanations of this phenomenon have not adequately distinguished recovery, which is dependent on original learning, from generalization, which is independent of original learning. Using a measure of vector similarity to track global changes in the weight state of three-layer networks, we show that (a) unlike in networks…
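The abstract's "measure of vector similarity" for tracking global changes in the weight state can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes a NumPy environment, treats all weights of a three-layer network as one concatenated vector, and compares two hypothetical weight-state snapshots with cosine similarity. The layer sizes, snapshot names, and noise level are illustrative assumptions only.

    # Minimal sketch (assumed NumPy setup), not the authors' code.
    import numpy as np

    def flatten_weights(weight_matrices):
        # Concatenate all weight matrices of a three-layer network into one vector.
        return np.concatenate([w.ravel() for w in weight_matrices])

    def cosine_similarity(w_a, w_b):
        # Values near 1.0 mean the global weight state has barely moved;
        # smaller values mean it has changed substantially.
        return float(np.dot(w_a, w_b) / (np.linalg.norm(w_a) * np.linalg.norm(w_b)))

    # Hypothetical snapshots: after original learning and after relearning a subset.
    rng = np.random.default_rng(0)
    w_after_learning   = [rng.normal(size=(10, 5)), rng.normal(size=(5, 4))]
    w_after_relearning = [w + 0.05 * rng.normal(size=w.shape) for w in w_after_learning]

    sim = cosine_similarity(flatten_weights(w_after_learning),
                            flatten_weights(w_after_relearning))
    print(f"cosine similarity of weight states: {sim:.3f}")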

Cited by 4 publications (12 citation statements)
References 22 publications
“…2 Recovery in the VOCAB network (20 runs). When the VOCAB network relearns a portion of the originally learned set of training patterns, performance on both sets R and U improves (from Atkins & Murre, 1998). Pattern error was defined as the sum of squares of the difference between the actual response and the target response across all output units and across all items in the set, divided by the number of patterns so as to eliminate the effects of set size (Rumelhart et al., 1986, p. 323).…”
Section: Results (mentioning)
confidence: 99%
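As a worked illustration of the pattern-error measure quoted above, here is a minimal sketch assuming NumPy arrays of shape (n_patterns, n_output_units): the squared difference between actual and target responses is summed over output units and items, then divided by the number of patterns so that set size does not inflate the score. The array names and example values are hypothetical, not taken from the original papers.

    import numpy as np

    def pattern_error(actual, target):
        # actual, target: arrays of shape (n_patterns, n_output_units)
        n_patterns = actual.shape[0]
        return float(np.sum((actual - target) ** 2) / n_patterns)

    # Example with three 4-unit output patterns (illustrative values).
    target = np.array([[1, 0, 0, 1],
                       [0, 1, 0, 1],
                       [1, 1, 0, 0]], dtype=float)
    actual = np.array([[0.9, 0.1, 0.0, 0.8],
                       [0.2, 0.7, 0.1, 0.9],
                       [0.8, 0.9, 0.2, 0.1]], dtype=float)
    print(f"pattern error: {pattern_error(actual, target):.4f}")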
“…This was the procedure used by Hinton and Sejnowski (1986) to test their Boltzmann machine network. With Hebbian learning, the forgetting produced by this method is approximately equivalent to training on other random patterns, the approach used by Atkins and Murre (1998) and Hinton and Plaut (1987).…”
Section: Methods (mentioning)
confidence: 97%
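To make the two forgetting procedures mentioned in this excerpt concrete, the sketch below contrasts (a) perturbing learned weights with random noise, in the spirit of the Hinton and Sejnowski (1986) test, with (b) continuing Hebbian training on additional random patterns, as in Atkins and Murre (1998) and Hinton and Plaut (1987). The single-layer Hebbian associator, pattern counts, and noise level are simplifying assumptions for illustration, not the setups used in those papers.

    import numpy as np

    rng = np.random.default_rng(1)
    n_units = 20

    # Hebbian learning of random input -> output pattern pairs.
    inputs  = rng.choice([-1.0, 1.0], size=(10, n_units))
    targets = rng.choice([-1.0, 1.0], size=(10, n_units))
    W = sum(np.outer(t, x) for x, t in zip(inputs, targets)) / n_units

    def recall_accuracy(W, inputs, targets):
        # Fraction of output units whose sign matches the target across all items.
        recalled = np.sign(inputs @ W.T)
        return float(np.mean(recalled == targets))

    # (a) Forgetting by adding Gaussian noise to the learned weights.
    W_noisy = W + 0.5 * rng.normal(size=W.shape)

    # (b) Forgetting by further Hebbian training on interfering random patterns.
    extra_in  = rng.choice([-1.0, 1.0], size=(10, n_units))
    extra_out = rng.choice([-1.0, 1.0], size=(10, n_units))
    W_interfered = W + sum(np.outer(t, x) for x, t in zip(extra_in, extra_out)) / n_units

    print("original   :", recall_accuracy(W, inputs, targets))
    print("noise      :", recall_accuracy(W_noisy, inputs, targets))
    print("interfered :", recall_accuracy(W_interfered, inputs, targets))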