2007
DOI: 10.1016/j.mcm.2006.04.004
Kohonen neural networks and genetic classification

Abstract: We discuss the property of a.e. and in-mean convergence of the Kohonen algorithm considered as a stochastic process. The various conditions ensuring a.e. convergence are described, and the connection with the rate of decay of the learning parameter is analyzed. The rate of convergence is discussed for different choices of learning parameters. We prove rigorously that the rate of decay of the learning parameter most used in applications is a sufficient condition for a.e. convergence, and we check it…
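As a sketch of the algorithm the abstract analyzes, here is a minimal one-dimensional Kohonen (SOM) update with a decaying learning parameter. The schedule eta_t = eta_0/(1+t) satisfies the usual stochastic-approximation conditions (sum of eta_t diverges, sum of eta_t^2 converges); the specific decay and the hard neighbourhood window used here are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def kohonen_step(weights, x, t, eta0=0.5, radius=1):
    """One stochastic update of a 1-D Kohonen map.

    weights : (n_nodes, dim) array of node weight vectors
    x       : (dim,) input sample
    t       : iteration counter driving the learning-rate decay
    """
    # decaying learning parameter: sum eta_t = inf, sum eta_t^2 < inf
    eta = eta0 / (1.0 + t)
    # best-matching unit: node whose weight vector is closest to x
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # move the BMU and its grid neighbours within `radius` toward x
    lo, hi = max(0, bmu - radius), min(len(weights), bmu + radius + 1)
    weights[lo:hi] += eta * (x - weights[lo:hi])
    return weights

rng = np.random.default_rng(0)
w = rng.random((10, 2))          # 10 nodes, 2-D inputs
for t in range(1000):
    w = kohonen_step(w, rng.random(2), t)
```

Because eta stays below 1, each update is a convex combination of the old weight and the input, so weights remain inside the input domain.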

Cited by 16 publications (9 citation statements)
References 27 publications
“…ANNs, which simulate the functioning of the human brain, are frequently applied for regression 45,46 and classification purposes. 47 An ANN consists of artificial neurons organized in layers with intra- or inter-layer connections, resulting in feed-forward (standard) or feed-back networks. Each neuron is characterized by numeric weights, which are adjusted (trained) using either a supervised algorithm, if target (output) values are needed, or an unsupervised one.…”
Section: Artificial Neural Network
confidence: 99%
“…Each node in the SOM layer is fed by the input vector and is equipped with a weight vector. The weight vectors of the map nodes must have the same dimension as the input vectors, or the algorithm will not work (Kohonen, 1989; Song and Hopke, 1996; Kim et al., 2002; Hoffmann, 2005; Marini et al., 2005; Bianchi et al., 2007). Fig.…”
Section: Kohonen Neural Network
confidence: 99%
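The excerpt above stresses that every SOM node's weight vector must share the input vector's dimensionality, since the winner is found by distance comparison. A minimal sketch of that competition step (the function name and error check are illustrative, not from any cited paper):

```python
import numpy as np

def best_matching_unit(weights, x):
    """Return the index of the SOM node whose weight vector is closest to x.

    Every node's weight vector must have the same dimension as the input
    vector; otherwise the Euclidean distance is undefined.
    """
    weights = np.asarray(weights, dtype=float)
    x = np.asarray(x, dtype=float)
    if weights.shape[1] != x.shape[0]:
        raise ValueError(
            f"weight dim {weights.shape[1]} != input dim {x.shape[0]}")
    dists = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(dists))

w = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.0]])
winner = best_matching_unit(w, np.array([0.9, 0.8]))  # node 1 is closest
```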
“…SOMs are very often used in the analysis of large data structures, e.g. in problems of clustering or classification [9], [10], [11], [12], image processing [13], [14], [15], robotics [16], [17], time series forecasting [18], [19], [20], and fault detection and identification [21], [22], [23].…”
Section: Introduction
confidence: 99%