2006
DOI: 10.1007/s11063-006-9008-7
BYY Harmony Learning on Finite Mixture: Adaptive Gradient Implementation and A Floating RPCL Mechanism

Abstract: In tackling the learning problem on a set of finite samples, Bayesian Ying-Yang (BYY) harmony learning has developed a new learning mechanism that implements model selection either automatically during parameter learning or with the help of evaluating a new class of model selection criteria. In this paper, parameter learning with automated model selection has been studied for the finite mixture model via an adaptive gradient learning algorithm for BYY harmony learning on a specific bidirectional architecture (BI-…

Cited by 23 publications (6 citation statements)
References 23 publications
“…According to the theoretical and experimental results on this BI-architecture of the BYY harmony learning system for Gaussian mixtures [20,17,18,19], the maximization of J(Θ_k) is capable of making model selection adaptively during parameter learning when the actual Gaussians or clusters are separated to a certain degree. That is, if we choose k to be larger than the number k* of actual Gaussians or clusters in the sample data, the maximization of the harmony function can make k* Gaussians match the actual ones and simultaneously eliminate the k − k* extra ones.…”
Section: BYY Harmony Learning of Gaussian Mixtures
confidence: 99%
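The elimination of extra components described in this excerpt can be illustrated with a deliberately simplified sketch. This is not the paper's BI-architecture algorithm: it uses a winner-take-all assignment with a log-weight bias (a crude stand-in for the peaked posterior that harmony maximization induces), under which redundant components tend to be starved of samples so their mixing weights fall toward zero. All constants, the 1-D data, and the over-specified k = 4 are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 1-D clusters (k* = 2); deliberately over-specify k = 4.
data = np.concatenate([rng.normal(-5.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])

k = 4
means = rng.uniform(-8.0, 8.0, k)
weights = np.full(k, 1.0 / k)

for _ in range(100):
    # Winner-take-all assignment under the score log w_j - (x - m_j)^2 / 2,
    # a simplified proxy for a sharply peaked component posterior.
    score = (np.log(np.maximum(weights, 1e-12))[None, :]
             - 0.5 * (data[:, None] - means[None, :]) ** 2)
    win = np.argmax(score, axis=1)
    for j in range(k):
        members = data[win == j]
        # Components that win few samples see their mixing weight shrink,
        # which in turn makes them even less likely to win.
        weights[j] = len(members) / len(data)
        if len(members) > 0:
            means[j] = members.mean()

# Components whose weight has collapsed can be pruned after learning.
print("weights:", np.round(weights, 3))
print("means:  ", np.round(means, 2))
```

The log-weight term is what distinguishes this from plain hard k-means: it gives larger components an assignment advantage, so two components competing for the same cluster do not settle into a stable split as easily.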
“…It has already been implemented for Gaussian mixture learning, and several BYY harmony learning algorithms have been established for Gaussian mixtures [17,18,19]. Although BYY harmony learning has the ability of adaptive model selection, its parameter estimation deviates notably from the ML estimation, which is consistent with the true parameters.…”
Section: Introduction
confidence: 99%
“…The typical example of such a case is a(n) = a_0/n, where a_0 is a positive constant [25]. Another choice is to fix the learning rate a as a positive constant [26,27], which we utilize here, since the initial partition is good enough that the objective function Eq.…”
Section: Algorithm 1 (SDP)
confidence: 99%
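The two learning-rate schedules contrasted in this excerpt can be compared on a toy stochastic-approximation problem (estimating a mean from noisy samples). The values a_0 = 0.5, the target of 3.0, and the step count are arbitrary choices for this sketch, not values from the cited works.

```python
import random

random.seed(0)

alpha_0 = 0.5   # arbitrary positive constant for the illustration
target = 3.0    # true mean of the noisy observations

def run(schedule, steps=5000):
    """Stochastic-approximation update theta <- theta + a(n) * (x_n - theta)."""
    theta = 0.0
    for n in range(1, steps + 1):
        x = random.gauss(target, 1.0)
        theta += schedule(n) * (x - theta)
    return theta

# a(n) = a_0 / n: the decaying schedule; the iterate converges to the target.
decaying = run(lambda n: alpha_0 / n)
# a(n) = a_0 fixed: the iterate keeps fluctuating around the target,
# which is acceptable when the initial point is already good.
constant = run(lambda n: alpha_0)

print(round(decaying, 2), round(constant, 2))
```

The decaying schedule satisfies the classical Robbins-Monro conditions (the rates sum to infinity while their squares do not), which is why it converges; the constant rate trades that convergence for the ability to keep tracking a good solution.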
“…To further test the validity of the algorithms, we apply them to a sample network generated from a Gaussian mixture model [25][26][27]. This model is quite related to the concept of the random geometric graph proposed by Penrose [29], except that we take a Gaussian mixture here compared with the uniform distribution in [29].…”
Section: Sample Network Generated from the Gaussian Mixture Model
confidence: 99%
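A minimal sketch of such a construction, assuming the simplest variant: draw node positions from a two-component 2-D Gaussian mixture and connect any two nodes closer than a radius r, mirroring the random geometric graph recipe but with a non-uniform position distribution. The centers, radius, and component sizes below are illustrative, not the cited paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Node positions drawn from a two-component 2-D Gaussian mixture,
# rather than the uniform distribution of a classical random geometric graph.
centers = np.array([[0.0, 0.0], [6.0, 0.0]])
n_per = 50
pos = np.vstack([rng.normal(c, 1.0, size=(n_per, 2)) for c in centers])

# Connect nodes whose Euclidean distance is below r (chosen arbitrarily).
r = 1.5
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
adj = (d < r) & ~np.eye(len(pos), dtype=bool)

print("nodes:", len(pos), "edges:", int(adj.sum() // 2))
```

Because the positions cluster around the mixture centers, the resulting network has dense communities around each component, which is what makes such samples useful for testing community-detection algorithms.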