1978
DOI: 10.1111/j.2517-6161.1978.tb01654.x
A Quasi-Bayes Sequential Procedure for Mixtures

Abstract: Coherent Bayes sequential learning and classification procedures are often useless in practice because of ever-increasing computational requirements. On the other hand, computationally feasible procedures may not resemble the coherent solution, nor guarantee consistent learning and classification. In this paper, a particular form of classification problem is considered and a "quasi-Bayes" approximate solution requiring minimal computation is motivated and defined. Convergence properties are established and a n…

Cited by 48 publications (37 citation statements); References 7 publications.
“…To overcome this difficulty, on-line quasi-Bayes learning (cf. Huo and Lee, 1997; Smith and Makov, 1978) first approximates the successive posterior distributions by the "closest" tractable distribution within a given class P, under the criterion that both distributions have the same mode; the EM algorithm is then applied to estimate the hyperparameters f of the approximate posterior distribution, and the model parameters l are incrementally updated. Empirical evidence showed that the quasi-Bayes algorithm generally converges to a good solution and behaves similarly to the batch MAP algorithm (cf.…”
Section: Titterington's Recursive Estimatormentioning
confidence: 99%
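The recursive scheme described in the statement above can be sketched as follows. This is a minimal illustration, not the cited papers' code: the two-component Gaussian setup, the function names, and the unit Dirichlet pseudo-counts are all assumptions. Each observation's unknown label is split fractionally according to the current posterior responsibilities, so the intractable posterior over the mixing weights is replaced at every step by a single tractable Dirichlet.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def quasi_bayes_weights(data, components, alpha):
    """Sequentially update Dirichlet pseudo-counts `alpha` for the
    mixing weights of a mixture with known component densities.

    Each observation adds its posterior responsibility (a fraction
    between 0 and 1) to each component's pseudo-count, keeping the
    approximate posterior within the Dirichlet family.
    """
    for x in data:
        total = sum(alpha)
        # responsibilities under the current posterior-mean weights
        resp = [a / total * normal_pdf(x, mu, sd)
                for a, (mu, sd) in zip(alpha, components)]
        s = sum(resp)
        resp = [r / s for r in resp]
        # fractional update: split the observation across components
        alpha = [a + r for a, r in zip(alpha, resp)]
    total = sum(alpha)
    return [a / total for a in alpha]  # posterior-mean weight estimates

random.seed(0)
# simulate from a 0.7 / 0.3 mixture of N(-2, 1) and N(2, 1)
data = [random.gauss(-2, 1) if random.random() < 0.7 else random.gauss(2, 1)
        for _ in range(2000)]
est = quasi_bayes_weights(data, [(-2.0, 1.0), (2.0, 1.0)], [1.0, 1.0])
print(est)
```

With well-separated components the responsibilities are nearly hard assignments, and the estimated weights settle close to the simulated 0.7/0.3 split; the per-observation cost is constant, which is the computational point the abstract makes.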
“…Most of the techniques work with the sequential form of calculating and simplifying the posterior (e.g., Bernardo and Giron 1986; Smith and Makov 1978). By simplifying the posterior after each observation, these methods avoid many computational problems associated with an exact analysis.…”
Section: Classical and Bayesian Estimation Techniquesmentioning
confidence: 99%
“…Techniques include the 'probabilistic teacher', in which the missing variables are randomly fixed according to their current posterior distribution, and 'quasi-Bayes' (Smith and Makov 1978) or 'fractional updating' (Titterington 1976).…”
Section: Reducing the Mixturesmentioning
confidence: 99%
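The 'probabilistic teacher' mentioned in the statement above differs from quasi-Bayes only in how the missing label is handled: instead of splitting each observation fractionally, it draws a hard label from the current responsibilities and updates that component's count by one. A sketch under the same illustrative two-component assumptions (names and setup are not from the cited papers):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def probabilistic_teacher(data, components, alpha, rng):
    """Sequential Dirichlet update where each observation's missing
    label is *sampled* from the current posterior responsibilities
    rather than split fractionally (quasi-Bayes)."""
    for x in data:
        total = sum(alpha)
        resp = [a / total * normal_pdf(x, mu, sd)
                for a, (mu, sd) in zip(alpha, components)]
        s = sum(resp)
        probs = [r / s for r in resp]
        # randomly fix the missing label, then add a full count
        j = rng.choices(range(len(alpha)), weights=probs)[0]
        alpha[j] += 1.0
    total = sum(alpha)
    return [a / total for a in alpha]

rng = random.Random(1)
# simulate from a 0.3 / 0.7 mixture of N(-2, 1) and N(2, 1)
data = [rng.gauss(-2, 1) if rng.random() < 0.3 else rng.gauss(2, 1)
        for _ in range(2000)]
est = probabilistic_teacher(data, [(-2.0, 1.0), (2.0, 1.0)], [1.0, 1.0], rng)
print(est)
```

Both recursions keep the approximate posterior inside the Dirichlet family at constant per-observation cost; the teacher's hard assignments inject extra sampling noise, while fractional updating averages it out.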