1998
DOI: 10.1016/s0925-2312(98)00034-4

Theoretical aspects of the SOM algorithm

Abstract: The SOM algorithm is quite astonishing. On the one hand, it is very simple to write down and to simulate, and its practical properties are clear and easy to observe. On the other hand, its theoretical properties still lack a proof in the general case, despite the great efforts of several authors. In this paper, we review the latest results and offer some conjectures for future work.
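
The abstract stresses that the update rule is simple to write down and to simulate. As a rough illustration only, here is a minimal sketch of the classical on-line SOM update; the map size, learning-rate schedule and Gaussian neighbourhood below are assumptions made for the example, not values from the paper.

```python
import numpy as np

def som_step(weights, grid, x, lr, sigma):
    """One on-line SOM step: find the best-matching unit, then pull every
    unit towards the input, weighted by a Gaussian neighbourhood on the grid."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    grid_dist = np.linalg.norm(grid - grid[winner], axis=1)
    h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))
    # Kohonen update rule: w_i <- w_i + lr * h_i * (x - w_i)
    return weights + lr * h[:, None] * (x - weights)

# Illustrative run: a 10x10 map trained on random 2-D data (all assumptions).
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
weights = rng.random((100, 2))
for t, x in enumerate(rng.random((2000, 2))):
    frac = t / 2000
    weights = som_step(weights, grid, x, lr=0.5 * (1 - frac),
                       sigma=3.0 * (1 - frac) + 0.5)
```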

Cited by 189 publications (116 citation statements)
References 49 publications
“…As recommended (Cottrell et al., 1998; Mulier and Cherkassky, 1995; Song and Hopke, 1996), a two-phase learning algorithm was applied. The first learning phase (rough learning) was characterized by a strongly decreasing learning rate and neighbourhood that allowed general mapping, with large groups of neurons responding to similar data (Kohonen, 2001).…”
Section: Methods
confidence: 99%
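
The two-phase schedule mentioned in this excerpt (rough learning followed by fine tuning) can be sketched as below; the phase lengths, learning rates and neighbourhood widths are illustrative assumptions, not values from the cited works.

```python
def two_phase_schedule(step, rough_steps=2_000, fine_steps=20_000):
    """Hypothetical two-phase schedule: a short rough-ordering phase with a
    large, strongly decreasing learning rate and neighbourhood, followed by
    a long fine-tuning phase with small, slowly decreasing values."""
    if step < rough_steps:                           # phase 1: rough learning
        frac = step / rough_steps
        lr = 0.5 * (1.0 - frac) + 0.05               # 0.50 -> 0.05
        sigma = 5.0 * (1.0 - frac) + 1.0             # 5.0  -> 1.0
    else:                                            # phase 2: fine tuning
        frac = min((step - rough_steps) / fine_steps, 1.0)
        lr = 0.05 * (1.0 - frac) + 0.01              # 0.05 -> 0.01
        sigma = 1.0                                   # small, fixed neighbourhood
    return lr, sigma
```
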
“…However, if the primary goal is clustering, a fixed topology puts restrictions on the map and topology preservation often cannot be achieved [30]. SOM does not possess a cost function in the continuous case and its mathematical investigation is difficult [9]. However, if the winner is chosen as the neuron $i$ with minimum averaged distance $\sum_{l=1}^{n} h_{\lambda}(\mathrm{nd}(i,l))\, d(x^{j}, w_{l})$, it optimizes the cost…”
Section: Neural Gas
confidence: 99%
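
The expression in this excerpt is the averaged-distance winner rule: the winner is the unit $i$ minimising $\sum_{l=1}^{n} h_{\lambda}(\mathrm{nd}(i,l))\, d(x^{j}, w_{l})$. A small sketch of that rule follows; the exponential form of $h_{\lambda}$ and the use of squared Euclidean distance are assumptions.

```python
import numpy as np

def averaged_distance_winner(x, weights, grid_dist, lam):
    """Modified winner rule: pick the unit i minimising
    sum_l h_lambda(nd(i, l)) * d(x, w_l), where nd(i, l) is the distance
    between units i and l on the map and h_lambda an exponential kernel."""
    d = np.linalg.norm(weights - x, axis=1) ** 2   # d(x, w_l) for every unit l
    h = np.exp(-grid_dist / lam)                   # h_lambda(nd(i, l)), shape (n, n)
    return int(np.argmin(h @ d))                   # averaged distance for each i
```
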
“…However, a fixed prior lattice as chosen in SOM might be suboptimal for a given task depending on the data topology, and topological mismatches can easily occur [30]. SOM does not possess a cost function in the continuous case, and the mathematical analysis is quite difficult unless variations of the original learning rule are considered for which cost functions can be found [9,16]. NG optimizes a cost function which, as a limit case, yields the quantization error [21].…”
Section: Introduction
confidence: 99%
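
For the limit case mentioned above, the quantization error is simply the mean (squared) distance of every sample to its closest prototype; a minimal sketch, assuming squared Euclidean distance:

```python
import numpy as np

def quantization_error(data, weights):
    """Mean squared distance of each sample to its best-matching prototype,
    i.e. the cost that remains when the neighbourhood shrinks to zero."""
    dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2) ** 2
    return float(dists.min(axis=1).mean())
```
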
“…This is possible because SOMs perform a non-linear projection of the input data onto the elements of a regular array, usually of low dimension [9]. The main characteristic of the projection is the preservation of neighborhood relations in the output space, which makes it possible to see more clearly the structure hidden in the high-dimensional data, such as clusters [10,11].…”
Section: Introduction
confidence: 99%
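
The projection described in this excerpt amounts to mapping every sample to the grid coordinates of its best-matching unit; a minimal sketch, assuming a 2-D rectangular map:

```python
import numpy as np

def project_to_map(data, weights, grid):
    """Non-linear projection onto the map: each sample is assigned the grid
    position of its best-matching unit, so neighbouring inputs tend to land
    on neighbouring cells, which exposes cluster structure visually."""
    bmus = np.argmin(
        np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2), axis=1
    )
    return grid[bmus]            # (n_samples, 2) array of map coordinates
```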