Advances in Self-Organising Maps 2001
DOI: 10.1007/978-1-4471-0715-6_10
Recursive learning rules for SOMs

Cited by 5 publications (5 citation statements)
References 5 publications
“…This resembles the way nodes are pulled in a fisherman's net, and thus this update rule was dubbed 'fisherman's rule'. It has been shown (Lee et al., 2001) that this will improve the convergence speed in the first iterations of the learning phase. The main reason for this is that, at each learning step, the different units will not be attracted in exactly the same direction (the direction of the input pattern), but will instead be pulled in a direction that depends on their immediate neighbors.…”
Section: Learning Rule (Update Phase)
confidence: 99%
“…One alternative is to move the unit in the direction of the nearest unit, as was proposed by Lee et al. (2001). This resembles the way nodes are pulled in a fisherman's net, and thus this update rule was dubbed 'fisherman's rule'.…”
Section: Learning Rule (Update Phase)
confidence: 99%
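
The excerpts above describe the update rule only in words. The sketch below shows one plausible reading for a 1-D chain of units: the best-matching unit (BMU) is pulled toward the input, while every other unit is pulled toward its neighbor on the BMU side, so the pull propagates along the chain like a fisherman's net. The learning rate, Gaussian neighborhood width, and 1-D topology are illustrative assumptions, not the exact rule of Lee et al. (2001).

```python
import numpy as np

def fisherman_update_1d(weights, x, bmu, alpha=0.5, sigma=2.0):
    """One 'fisherman's rule'-style update step for a 1-D chain of SOM units.

    The BMU moves toward the input x; every other unit moves toward the
    *updated* position of its neighbor on the BMU side, so the pull
    propagates outward like a net being hauled in. `alpha` and `sigma`
    are illustrative choices, not values from the cited paper.
    """
    new_w = weights.copy()
    # BMU moves toward the input, as in the standard SOM rule.
    new_w[bmu] += alpha * (x - new_w[bmu])
    # Units to the right of the BMU are pulled toward their left neighbor.
    for i in range(bmu + 1, len(weights)):
        h = np.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
        new_w[i] += alpha * h * (new_w[i - 1] - new_w[i])
    # Units to the left of the BMU are pulled toward their right neighbor.
    for i in range(bmu - 1, -1, -1):
        h = np.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
        new_w[i] += alpha * h * (new_w[i + 1] - new_w[i])
    return new_w

# Usage: 10 units in a 2-D input space, one random input sample.
rng = np.random.default_rng(0)
W = rng.random((10, 2))
x = rng.random(2)
bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
W = fisherman_update_1d(W, x, bmu)
```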
“…Here we present selected results for 8 × 8 and 16 × 16 neurons for two example 2-D data sets. The results for 2-D sets have been selected for a better illustration [8], [21], [24], [26]. In both sets, data are divided into P classes (centers), where P equals the number of neurons in the map.…”
Section: Optimization of the SOM on the System Level
confidence: 99%
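
As a rough illustration of the setup described in the excerpt above, the snippet below generates a 2-D data set with P cluster centers, where P equals the number of neurons of an 8 × 8 map. The center layout, cluster spread, and sample counts are assumptions for illustration, not values from the cited study.

```python
import numpy as np

# 2-D test data grouped around P cluster centers, with P equal to the
# number of neurons in the map (64 for an 8x8 SOM).
rng = np.random.default_rng(42)
map_side = 8                      # 8 x 8 map -> P = 64 classes
P = map_side * map_side
centers = rng.uniform(0.0, 1.0, size=(P, 2))
samples_per_class = 50            # assumed sample count per class
data = np.vstack([
    c + 0.01 * rng.standard_normal((samples_per_class, 2)) for c in centers
])
labels = np.repeat(np.arange(P), samples_per_class)
print(data.shape, labels.shape)   # (3200, 2) (3200,)
```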
“…PCA is an optimal linear dimension reduction technique (in the mean-square sense); LDA finds the linear projection that maximizes the separation between classes, relative to the dispersion of data within classes. CCA (Demartines et al., 1997; Lee, 2000, 2002), on the contrary, is relatively recent. It may be seen as a non-linear generalization of PCA, but is actually a topology-preserving method: it searches for vectors in the transformed space that reproduce the distances in the input space.…”
Section: FDI Architecture and Design
confidence: 99%
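
To make the topology-preserving idea concrete, here is a toy numpy sketch of a CCA-style distance-preservation cost: pairwise distances in the low-dimensional output space are matched to those in the input space, with a decreasing weight that emphasizes short output distances so local structure is preserved first. The weighting function, learning rate, and batch gradient update are simplifications for illustration and do not reproduce the algorithm of Demartines et al. (1997) or Lee (2000, 2002).

```python
import numpy as np

def pairwise_dist(A):
    """Euclidean distance matrix between the rows of A."""
    return np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)

def cca_like_embedding(X, n_components=2, lam=1.0, lr=0.1, n_iter=500, seed=0):
    """Toy batch gradient descent on a CCA-style distance-preservation cost.

    Reduces sum_ij (d_in_ij - d_out_ij)^2 * F(d_out_ij), where
    F(d) = exp(-d / lam) down-weights large output distances so that local
    distances are reproduced first. All hyper-parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    D_in = pairwise_dist(X)
    Y = 0.1 * rng.standard_normal((n, n_components))
    for _ in range(n_iter):
        D_out = pairwise_dist(Y) + 1e-12            # avoid division by zero
        F = np.exp(-D_out / lam)                    # emphasize short output distances
        coef = (D_in - D_out) * F / D_out           # per-pair pull/push strength
        # Move each point along (Y_i - Y_j): outward if the output distance is
        # too small, inward if too large, reducing the weighted stress.
        grad = (coef[:, :, None] * (Y[:, None, :] - Y[None, :, :])).mean(axis=1)
        Y += lr * grad
    return Y

# Usage: embed a small 3-D point cloud into 2-D.
rng = np.random.default_rng(1)
X = rng.random((50, 3))
Y = cca_like_embedding(X)
print(Y.shape)  # (50, 2)
```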