Lecture Notes in Computer Science
DOI: 10.1007/978-3-540-68860-0_3
Bayesian Ying Yang System, Best Harmony Learning, and Gaussian Manifold Based Family

Abstract: First proposed in 1995 and systematically developed over the past decade, Bayesian Ying-Yang (BYY) learning is a statistical approach to a two-pathway featured intelligent system via two complementary Bayesian representations of the joint distribution over the external observation X and its inner representation R, which can be understood from the perspective of the ancient Ying-Yang philosophy. We have q(X, R) = q(X|R)q(R) as the Ying, which is primary, with its structure designed according to the tasks of the system, and p(X, R…

Cited by 12 publications (16 citation statements). References 76 publications.
“…The simplest and widely studied example is Gaussian-based linear regression; see Table 2c for some examples. Third, in addition to these structures, the knowledge is also represented jointly, which may be further confined by background knowledge via an a priori structure q( | ) with an unknown parameter set q, for which readers are referred to several choices discussed in [52].…”
Section: Type 3: Optimizations for Model Selection
confidence: 99%
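The "Gaussian based linear regression" mentioned in the excerpt can be read as the standard model y = w·x + b + e with Gaussian noise, for which maximum-likelihood fitting reduces to ordinary least squares. A minimal sketch under that reading (the data and dimensions here are illustrative, not from the cited paper):

```python
import numpy as np

# Illustrative model: y = X @ w + b + e, with e ~ N(0, sigma^2).
# Under this Gaussian noise model the ML estimate of (w, b) is OLS.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true, b_true = np.array([1.5, -2.0, 0.5]), 0.7
y = X @ w_true + b_true + 0.1 * rng.normal(size=200)

# Augment with a bias column and solve the least-squares problem.
Xb = np.hstack([X, np.ones((200, 1))])
theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
w_hat, b_hat = theta[:3], theta[3]

# The ML estimate of the noise variance is the mean squared residual.
sigma2_hat = np.mean((y - Xb @ theta) ** 2)
```

With enough samples, the recovered weights, bias, and noise variance closely match the generating values.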
“…5b can be determined as an upper bound estimate of k* during parameter learning on * and * by implementing only Stage I in Fig. 5a, which can significantly reduce the computational cost needed for a two-stage implementation [52]. However, the performance of this automatic model selection deteriorates as the sample size N decreases.…”
Section: Maximizing H(p‖q) Forces q(X|R)q(R) to Match p(R|X)p(X)
confidence: 99%
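The excerpt contrasts BYY's automatic selection (Stage I only) with a conventional two-stage implementation: fit every candidate model scale by parameter learning, then score each fit with a selection criterion. As a generic stand-in for that two-stage scheme (not the BYY harmony criterion itself), here is a BIC-based sketch for choosing a polynomial order:

```python
import numpy as np

# Illustrative data: a quadratic signal plus Gaussian noise.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=120)
y = 1.0 - 2.0 * x + 3.0 * x**2 + 0.1 * rng.normal(size=120)

def bic(order):
    # Stage I: maximum-likelihood fit of a polynomial of this order.
    V = np.vander(x, order + 1)
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    rss = np.sum((y - V @ coef) ** 2)
    # Stage II: Gaussian log-likelihood penalised for the parameter count
    # (polynomial coefficients plus the noise variance).
    n, k = len(x), order + 2
    return n * np.log(rss / n) + k * np.log(n)

# Two-stage selection: fit every candidate order, keep the best score.
best = min(range(6), key=bic)
```

Every candidate must be fitted before scoring, which is the repeated cost that a one-stage automatic scheme avoids.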
“…The two formulations, i.e., FA(a) and FA(b), are equivalent under the maximum-likelihood principle for parameter learning, but they differ under BYY harmony learning [14] for selecting m, which will be introduced in Sections 3.3 and 4.1. In the sequel, we assume µ = 0, Σ_e = σ…”
Section: Factor Analysis and Several Model Selection Criteria
confidence: 99%
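Reading the excerpt's truncated assumption as µ = 0 with isotropic noise Σ_e = σ²I, the factor-analysis generative form x = Ay + e with y ~ N(0, I) becomes probabilistic PCA, whose maximum-likelihood solution has a closed form via the eigendecomposition of the sample covariance. A hedged sketch under those assumptions (the dimensions and loadings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 2000, 5, 2           # samples, observed dim, latent dim
A = rng.normal(size=(d, m))    # hypothetical loading matrix
sigma2 = 0.1                   # isotropic noise variance

# Generate x = A y + e with y ~ N(0, I_m), e ~ N(0, sigma2 * I_d), mu = 0.
Y = rng.normal(size=(n, m))
X = Y @ A.T + np.sqrt(sigma2) * rng.normal(size=(n, d))

# Closed-form ML fit: eigendecompose the sample covariance; the noise
# variance is the mean of the discarded eigenvalues, and the loadings
# are the top eigenvectors scaled by the excess signal variance.
S = X.T @ X / n
evals, evecs = np.linalg.eigh(S)
evals, evecs = evals[::-1], evecs[:, ::-1]   # descending order
sigma2_ml = evals[m:].mean()
A_ml = evecs[:, :m] @ np.diag(np.sqrt(evals[:m] - sigma2_ml))
```

The loadings are recovered only up to an orthogonal rotation, so it is the column space of A_ml, not A_ml itself, that matches the generating matrix.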
“…However, these conditions will probably be violated when N and the SNR are small. Another approach to tackling model selection problems is the Bayesian Ying-Yang (BYY) harmony learning theory [14]. We defer its detailed introduction to the next section.…”
Section: DNLL and BYY-FA(a)
confidence: 99%