On the generative probability density model in the self-organizing map
2002
DOI: 10.1016/s0925-2312(01)00649-x

Cited by 16 publications (7 citation statements)
References 8 publications
“…Recently, attempts have been made to redefine SOM within a probabilistic framework; among these, Kostiainen and Lampinen [8] formulate a probability density model for which SOM training gives the maximum-likelihood estimate, based on the local error function. Yin and Allinson [9] propose the self-organizing mixture network (SOMN) for probability density estimation.…”
Section: Discussion
confidence: 99%
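As a rough, non-authoritative illustration of the local-error density idea quoted above: the sketch below trains a tiny 1-D SOM and then scores points with an unnormalized density p(x) ∝ exp(−βE(x)), where E(x) is the neighborhood-weighted squared error around the winning unit. All names and parameters here are illustrative; the exact model and its normalization in Kostiainen and Lampinen [8] differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: samples from a bimodal 1-D distribution.
data = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])

# Train a small 1-D SOM with classic online updates.
n_units = 10
w = rng.uniform(data.min(), data.max(), n_units)   # codebook vectors
units = np.arange(n_units)
stream = rng.permutation(np.tile(data, 5))
T = len(stream)

for t, x in enumerate(stream):
    lr = 0.5 * (1.0 - t / T)                        # decaying learning rate
    sigma = max(0.5, 3.0 * (1.0 - t / T))           # shrinking neighborhood
    winner = np.argmin((w - x) ** 2)
    h = np.exp(-(units - winner) ** 2 / (2 * sigma ** 2))
    w += lr * h * (x - w)

# Density in the spirit of the local-error idea: E(x) sums neighborhood-
# weighted squared errors around the winner, and p(x) ~ exp(-beta * E(x)),
# normalized numerically on a grid (the model in [8] differs in detail).
beta, sigma_h = 2.0, 1.0
grid = np.linspace(-5.0, 5.0, 1000)

def local_error(x):
    winner = np.argmin((w - x) ** 2)
    h = np.exp(-(units - winner) ** 2 / (2 * sigma_h ** 2))
    return float(np.sum(h * (x - w) ** 2))

E = np.array([local_error(x) for x in grid])
p = np.exp(-beta * E)
p /= p.sum() * (grid[1] - grid[0])                  # approximate normalization
print("density peaks near the codebook clusters at +/-2:", grid[p.argmax()])
```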
“…The best point estimates of the values of the hyperparameters are the mode of the posterior distribution

$$p(\alpha, \beta \mid D) \propto p(D \mid \alpha, \beta)\, p(\alpha, \beta). \tag{5}$$

If uninformative priors are chosen, this is equivalent to maximizing the evidence or marginal likelihood

$$p(D \mid \alpha, \beta) = \int p(D \mid \mathbf{W}, \beta)\, p(\mathbf{W} \mid \alpha)\, d\mathbf{W}. \tag{6}$$

The usual normal prior is chosen for the weights

$$p(\mathbf{W} \mid \alpha) = \left(\frac{\alpha}{2\pi}\right)^{N_W/2} \exp\!\left(-\frac{\alpha}{2}\,\|\mathbf{W}\|^2\right), \tag{7}$$

where $N_W$ is the number of weights in $\mathbf{W}$. Let us now define

$$S(\mathbf{W}) = \alpha E_W(\mathbf{W}) + \beta E_D(\mathbf{W}). \tag{8}$$

Using a second-order Taylor expansion of (8), we find that the evidence (6) can now be approximated by

$$p(D \mid \alpha, \beta) \approx \exp\!\left(-S(\mathbf{W}_{\mathrm{MP}})\right) (2\pi)^{N_W/2}\, |\mathbf{H}|^{-1/2}\, Z_W^{-1}(\alpha)\, Z_D^{-1}(\beta), \tag{9}$$

where $\mathbf{W}_{\mathrm{MP}}$ is the value of $\mathbf{W}$ at the maximum of the posterior distribution, and $\mathbf{H}$ is the Hessian of $S$ evaluated at $\mathbf{W}_{\mathrm{MP}}$. The log-evidence for $\alpha$ and $\beta$ is, thus, given by

$$\ln p(D \mid \alpha, \beta) = -\alpha E_W^{\mathrm{MP}} - \beta E_D^{\mathrm{MP}} - \tfrac{1}{2}\ln|\mathbf{H}| + \tfrac{N_W}{2}\ln\alpha + \tfrac{ND}{2}\ln\beta + k, \tag{10}$$

where all the constant terms have been grouped as $k$.…”
Section: GTM and the Evidence Framework
confidence: 99%
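The quoted derivation is the standard evidence framework; the sketch below reproduces the pattern of equations (7)-(10) for a plain linear-in-parameters model with a radial-basis design matrix, which is only a stand-in for GTM's basis-function matrix and not the cited authors' code. For a quadratic S(W) the Taylor expansion in (9) is exact, so the log-evidence can be evaluated in closed form and maximized over α and β, here by a simple grid search.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data and a fixed radial-basis design matrix Phi
# (a hypothetical stand-in for GTM's basis-function matrix).
N, M = 50, 8
x = np.linspace(-1, 1, N)
t = np.sin(np.pi * x) + rng.normal(0, 0.1, N)
centers = np.linspace(-1, 1, M)
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * 0.2 ** 2))

def log_evidence(alpha, beta):
    # Mode of the posterior over weights, i.e. the minimizer of S(W) in (8):
    A = beta * Phi.T @ Phi + alpha * np.eye(M)     # Hessian of S at the mode
    w_mp = beta * np.linalg.solve(A, Phi.T @ t)
    E_D = 0.5 * np.sum((t - Phi @ w_mp) ** 2)      # data misfit term
    E_W = 0.5 * w_mp @ w_mp                        # weight-penalty term
    # Log-evidence in the form of (10); constants grouped into the last term.
    return (-alpha * E_W - beta * E_D
            - 0.5 * np.linalg.slogdet(A)[1]
            + 0.5 * M * np.log(alpha)
            + 0.5 * N * np.log(beta)
            - 0.5 * N * np.log(2 * np.pi))

# Grid search over hyperparameters; the evidence-framework estimate is the peak.
alphas, betas = np.logspace(-3, 2, 30), np.logspace(0, 3, 30)
best, a_hat, b_hat = max((log_evidence(a, b), a, b)
                         for a in alphas for b in betas)
print(f"alpha={a_hat:.3g}, beta={b_hat:.3g}, log-evidence={best:.2f}")
```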
“…This technique seeks to identify good maps by detecting when the map reaches an equiprobabilistic state, in which every unit is equally likely to be the winner [13]. Maps derived from this heuristic have been shown to lead to sensible density estimates, that is, to models exhibiting good generalization abilities [14]. The conjecture is that these maps also possess good organization properties.…”
Section: B. Training Trajectories
confidence: 99%
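For concreteness, here is a minimal sketch, under an assumed setup rather than the method of [13] or [14], of how one might measure how close a trained map is to the equiprobabilistic state: compute the winner-hit histogram over the data and its normalized entropy, which equals 1 exactly when every unit wins equally often.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical trained codebook and data; in practice both would come
# from the SOM under inspection.
data = rng.normal(0.0, 1.0, (2000, 2))
codebook = rng.normal(0.0, 1.0, (25, 2))

# Winner counts: how often each unit is the best-matching unit.
d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
counts = np.bincount(d.argmin(1), minlength=len(codebook))
p = counts / counts.sum()

# Normalized entropy of the hit histogram: 1.0 means every unit wins
# equally often, i.e. the map is equiprobabilistic.
eps = 1e-12
H = -(p * np.log(p + eps)).sum() / np.log(len(codebook))
print(f"equiprobabilistic score: {H:.3f} (1.0 = perfectly uniform)")
```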
“…These models vary in the form of the interactions they assume the hidden generators follow in generating the observations. Several extensions and reformulations of the Kohonen model have been proposed in the literature: probabilistic self-organizing maps [2,23], and a probabilistic generalization of Kohonen's SOM [36] that maximizes a variational free energy combining the data log-likelihood with the Kullback-Leibler divergence between a normalized neighborhood function and the posterior distribution over the components given the data. There is also soft topographic vector quantization (STVQ), which minimizes a new error function defined through some divergence measure between data items and neurons [12,13].…”
Section: Introduction
confidence: 99%
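To make the free-energy formulation quoted above concrete, the sketch below evaluates, for a toy lattice of Gaussian components, the quantity log p(x) − KL(q ‖ p(k|x)) summed over the data, with q a normalized neighborhood function centred on each sample's winning component. This is one plausible reading of the description of [36], not its actual algorithm; all names and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed setup: K Gaussian mixture components on a 1-D lattice, means mu,
# shared variance var, equal mixing weights 1/K; 1-D data x.
K, var = 10, 0.25
lattice = np.arange(K)
mu = np.linspace(-3.0, 3.0, K)
x = rng.normal(0.0, 1.5, 500)

def free_energy(sigma_h):
    # Per-sample, per-component joint log-densities log p(x, k).
    log_px_k = -0.5 * ((x[:, None] - mu[None, :]) ** 2 / var
                       + np.log(2 * np.pi * var)) - np.log(K)
    # Posterior over components given each sample, p(k | x).
    log_post = log_px_k - np.logaddexp.reduce(log_px_k, axis=1, keepdims=True)
    # q: normalized neighborhood function centred on each sample's winner.
    winner = log_px_k.argmax(1)
    q = np.exp(-(lattice[None, :] - winner[:, None]) ** 2
               / (2 * sigma_h ** 2))
    q /= q.sum(1, keepdims=True)
    # Per-sample free energy log p(x) - KL(q || p(k|x)), summed over data.
    log_px = np.logaddexp.reduce(log_px_k, axis=1)
    kl = (q * (np.log(q + 1e-12) - log_post)).sum(1)
    return (log_px - kl).sum()

# A narrower neighborhood makes q closer to the posterior, so the KL
# penalty shrinks and the free energy rises toward the log-likelihood.
for s in (0.5, 1.0, 2.0):
    print(f"sigma_h={s}: free energy = {free_energy(s):.1f}")
```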