The Self-Organizing Maps: Background, Theories, Extensions and Applications
2008 · DOI: 10.1007/978-3-540-78293-3_17

Cited by 146 publications (115 citation statements). References 113 publications (148 reference statements).
“…The self-organizing map (SOM) [5], also referred to as self-organizing feature map or Kohonen network, is a neural network that can be associated with vector quantization, visualization and clustering, but can also be used as an approach for non-linear, implicit dimensionality reduction [17]. A SOM consists of a collection of neurons connected in a topological arrangement, usually a two-dimensional rectangular or hexagonal grid.…”
Section: Self-Organizing Map
Citation type: mentioning
confidence: 99%
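The excerpt above describes the standard SOM setup: a codebook of weight vectors arranged on a 2D grid, trained by repeatedly finding the best-matching unit (BMU) for a sample and pulling it and its grid neighbors toward that sample. The following is a minimal illustrative sketch of that training loop, not taken from Yin (2008) or any cited implementation; the grid size, decay schedules and Gaussian neighborhood are assumptions chosen for demonstration.

```python
# Minimal SOM training sketch (illustrative; assumes NumPy).
import numpy as np

def train_som(data, rows=10, cols=10, n_iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # Codebook: one weight vector per neuron on a 2D rectangular grid.
    weights = rng.random((rows, cols, dim))
    # Grid coordinates of every neuron, used by the neighborhood function.
    grid = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
    )
    for t in range(n_iters):
        frac = t / n_iters
        lr = lr0 * (1.0 - frac)                # linearly decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3   # shrinking neighborhood width
        x = data[rng.integers(len(data))]      # one random training sample
        # Best-matching unit: the neuron whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Gaussian neighborhood centered on the BMU in *grid* space.
        grid_d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
        # Pull every neuron toward x, weighted by neighborhood strength.
        weights += lr * h[..., None] * (x - weights)
    return weights

if __name__ == "__main__":
    data = np.random.default_rng(1).random((500, 3))  # toy 3-D data
    w = train_som(data)
    print(w.shape)  # (10, 10, 3): a 10x10 map of 3-D prototype vectors
```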
“…In both SOM algorithms the learning process is performed over a prescribed number of iterations that should lead to an asymptotic equilibrium. Even if Kohonen (2001) argued that convergence is not a problem in practice, the convergence of the learning process to an optimal solution is however an unsolved issue (convergence has been formally proved only for the univariate case; Yin, 2008). The reason is that, unlike other neural network techniques, a SOM does not perform a gradient descent along a cost function that has to be minimized (Yin, 2008). Hence, in order to achieve reliable maps, the degree of optimality has to be assessed in other ways, e.g., by means of specific error metrics.…”
Section: Theoretical Background
Citation type: mentioning
confidence: 99%
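Since convergence to an optimum is not guaranteed, as the excerpt notes, map quality is typically assessed with error metrics. The sketch below illustrates two common ones, quantization error and topographic error; it assumes a `weights` array of shape `(rows, cols, dim)` as in the earlier sketch, and the 8-neighbor adjacency rule is an illustrative choice, not something prescribed by the cited works.

```python
# Two common SOM quality metrics (illustrative sketch).
import numpy as np

def quantization_error(data, weights):
    # Average distance from each sample to its best-matching unit.
    dists = np.linalg.norm(weights[None, ...] - data[:, None, None, :], axis=-1)
    return dists.reshape(len(data), -1).min(axis=1).mean()

def topographic_error(data, weights):
    # Fraction of samples whose two best-matching units are not
    # neighbors on the grid (here: 8-neighborhood adjacency).
    rows, cols, dim = weights.shape
    flat = weights.reshape(-1, dim)
    errors = 0
    for x in data:
        order = np.argsort(np.linalg.norm(flat - x, axis=1))
        r1, c1 = divmod(order[0], cols)   # best-matching unit
        r2, c2 = divmod(order[1], cols)   # second-best unit
        if max(abs(r1 - r2), abs(c1 - c2)) > 1:
            errors += 1
    return errors / len(data)
```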
“…To circumvent this issue, self-organizing maps (SOMs), a class of neural-network algorithms (Kohonen, 2001), were also computed. These techniques are now increasingly used for data visualisation, clustering and classification of large datasets (Yin, 2008). They allow representation of a multidimensional dataset by nonlinear projection of artefacts in a lower-dimension space, usually represented by discrete locations in a regular 2D lattice.…”
Section: Statistical Treatment of Morphological Data
Citation type: mentioning
confidence: 99%
“…Despite the loss of linearity in the output space, the topological relationships between objects (the order of the distances) are preserved (Liu and Weisberg, 2005). In our case, the use of SOMs can provide a first, exploratory step for further clustering of large datasets (Yin, 2008). Artefacts are assigned to the most similar prototype (also called codebook) vectors, which represent a set of locations summarizing the original data.…”
Section: Statistical Treatment of Morphological Data
Citation type: mentioning
confidence: 99%
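The assignment step described above, mapping each artefact (sample) to its most similar prototype (codebook) vector and hence to a discrete location on the 2D lattice, can be sketched as follows. Array names and shapes follow the earlier illustrative sketches and are assumptions, not the cited authors' code.

```python
# Assign each sample to its nearest prototype on the 2D lattice (sketch).
import numpy as np

def assign_to_prototypes(data, weights):
    rows, cols, dim = weights.shape
    # Flatten the grid into a list of prototype (codebook) vectors.
    codebook = weights.reshape(-1, dim)
    # Index of the nearest prototype for every sample.
    idx = np.argmin(
        np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=-1), axis=1
    )
    # Convert flat indices back to (row, col) positions on the lattice;
    # these discrete locations can feed visualisation or further clustering.
    return np.stack(np.divmod(idx, cols), axis=1)
```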