2011
DOI: 10.1016/j.stamet.2010.05.005
Fully Bayesian analysis of the relevance vector machine with an extended hierarchical prior structure

Cited by 12 publications (28 citation statements)
References 17 publications
“…Instead, we have sought throughout and hope to have given the reader a visceral sense of the appeal of the Bayesian paradigm as a statistical machine learning tool for data science. We conclude by mentioning a few contributions of the Bayesian paradigm to latent variable modelling and kernel regression, with works like [13], which introduces stable Radial Basis Function Selection via Mixture Modelling of the Sample Path, and [15], which extends it with a fully Bayesian Analysis of the Relevance Vector Machine With Extended Prior. Paper [12] proposes and develops a Bayesian computation of the Intrinsic Structure of Factor Analytic Models, drawing some of its elements from [16], where Mixtures of Factor Analysers feature Bayesian Estimation and Inference by Stochastic Simulation.…”
Section: Bayesian Paradigm in Ensemble Learning Methods (mentioning)
Confidence: 99%
“…We enforce smoothness in the basis matrix W and statistical sparseness (Hoyer [8]) in the weights matrix H by setting J 1 (W) and J 2 (H) equal to the square of their respective Frobenius norms (Pauca et al [13]). Using the penalties shown in (4) and (5), the standard NMF multiplicative updating equations for W and H (Seung and Lee [16]) are modified as follows:…”
Section: Constrained Nonnegative Matrix Factorization (mentioning)
Confidence: 99%
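The excerpt above describes penalized NMF: Frobenius-norm penalties on W and H add a shrinkage term to the standard Lee–Seung multiplicative updates. The following is a minimal illustrative sketch of that modification, not the cited authors' implementation; the function name, the random test matrix, and the penalty weights `alpha` and `beta` are assumptions chosen for the example.

```python
import numpy as np

def penalized_nmf(V, r, alpha=0.1, beta=0.1, iters=200, seed=0):
    """Multiplicative-update NMF with squared-Frobenius penalties on
    W and H (illustrative sketch of the modification quoted above).
    The penalty gradients alpha*W and beta*H enter the denominators."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    eps = 1e-9  # guard against division by zero
    for _ in range(iters):
        # H update: standard numerator/denominator plus beta*H penalty term
        H *= (W.T @ V) / (W.T @ W @ H + beta * H + eps)
        # W update: penalized analogue with alpha*W in the denominator
        W *= (V @ H.T) / (W @ H @ H.T + alpha * W + eps)
    return W, H

# toy usage on a random nonnegative matrix
V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = penalized_nmf(V, r=5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Setting `alpha = beta = 0` recovers the unpenalized multiplicative updates; larger values trade reconstruction error for smoother W and sparser H.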
“…Tipping [18] demonstrated the possibility of deriving sparse representations in kernel regression via suitably specified Gamma hyperpriors on the independent precisions of a Gaussian prior on the weights of the kernel expansion. Fokoué [5] explored a modified version of Tipping [18] by using a structured matrix of the form
Section: Toeplitz Nonnegative Matrix Factorization (mentioning)
Confidence: 99%
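The Tipping [18] mechanism quoted above — independent Gamma hyperpriors on per-weight precisions inducing sparsity — leads to an iterative re-estimation of the precisions in which irrelevant basis functions are pruned. Below is a minimal sketch of that idea under simplifying assumptions: the noise precision `beta` is held fixed, flat hyperpriors are used, and the design matrix and cap `alpha_max` are invented for illustration.

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=50, beta=100.0, alpha_max=1e6):
    """Sparse Bayesian regression sketch in the spirit of Tipping's RVM:
    each weight w_i has its own precision alpha_i; iterating the
    evidence-based re-estimates drives alpha_i -> infinity for
    irrelevant basis functions, pruning them from the model."""
    n, m = Phi.shape
    alpha = np.ones(m)  # initial precisions
    for _ in range(n_iter):
        # posterior covariance and mean of the weights
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # "well-determinedness" factors and precision re-estimates
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = np.minimum(gamma / (mu**2 + 1e-12), alpha_max)
    return mu, alpha

# toy example: only two of six basis columns actually generate the target
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 6))
w_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0])
t = Phi @ w_true + 0.1 * rng.standard_normal(100)
mu, alpha = rvm_regression(Phi, t)
```

After fitting, the posterior means for the two relevant columns stay near their true values while the precisions of the irrelevant columns grow very large, shrinking those weights toward zero.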
“…A worse situation is that the minority class is ignored by the classifiers, and all the samples are classified as the majority class. A hierarchical prior structure proposed by [9] is considered in this paper to modify the PRVM classification model for the imbalanced data problem. This prior reduces the dimensions of parameter space and builds the inner connection between hyperparameters.…”
Section: Introduction (mentioning)
Confidence: 99%