DOI: 10.1007/978-3-540-74958-5_29

Bayesian Inference for Sparse Generalized Linear Models

Abstract: We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal …
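To make the setting concrete, here is a minimal, hypothetical sketch (not the paper's EP algorithm; all names below are ours) of the model class the abstract describes: a GLM likelihood combined with a factorizing Laplace sparsity prior, whose log posterior is concave because both terms are concave in the weights.

import numpy as np

def log_posterior(w, X, y, tau=1.0):
    """Unnormalized log posterior of a Poisson GLM with a Laplace prior.

    The Poisson log-likelihood with exponential link is concave in w,
    and the factorizing Laplace log-prior is concave in w, so their sum,
    the log posterior, is log-concave as well.
    """
    eta = X @ w                                # linear predictor
    log_lik = np.sum(y * eta - np.exp(eta))    # Poisson log-likelihood (up to a constant)
    log_prior = -tau * np.sum(np.abs(w))       # factorizing Laplace prior (up to a constant)
    return log_lik + log_prior

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.5, 0.0, 0.0, -0.7, 0.0])  # sparse ground truth
y = rng.poisson(np.exp(X @ w_true))
print(log_posterior(np.zeros(5), X, y))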

Cited by 119 publications (225 citation statements)
References 13 publications

“…Most existing local variational approximation techniques are based on the convexity of the log-likelihood function or the log-prior (Bishop 2006; Jaakkola and Jordan 2000; Seeger 2008, 2009). We characterize these cases by using a general convex function φ and the Bregman divergence associated with φ.…”
Section: Local Variational Approximation
confidence: 99%
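For context, the Bregman divergence associated with a differentiable convex function φ, which this excerpt uses to unify such bounds, is the standard quantity

D_\varphi(x, y) = \varphi(x) - \varphi(y) - \langle \nabla\varphi(y),\, x - y \rangle \ge 0,

with equality if and only if x = y when φ is strictly convex.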
“…Furthermore, its asymptotic analysis has progressed in several statistical models (Watanabe 2006, 2007; Hosino et al. 2005; Watanabe et al. 2009). The latter, also known as direct site bounding, has been applied to logistic regression (Jaakkola and Jordan 2000) and sparse linear models (Seeger 2008, 2009). This approximation is generally characterized and described by using the Bregman divergence (Watanabe et al. 2011).…”
confidence: 99%
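A concrete instance of such a direct site bound (standard in the literature, stated here for reference rather than quoted from the excerpt) is the Jaakkola-Jordan bound on the logistic function \sigma(x) = 1/(1 + e^{-x}):

\sigma(x) \ge \sigma(\xi) \exp\!\left( \frac{x - \xi}{2} - \lambda(\xi)\,(x^2 - \xi^2) \right), \qquad \lambda(\xi) = \frac{1}{2\xi}\left( \sigma(\xi) - \frac{1}{2} \right),

which holds for all x and any ξ, with equality at x = ±ξ; because the bound is quadratic in x, it keeps Gaussian-form computations tractable.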
“…Similarly, it has been shown that in some settings adaptive methods can achieve the same error performance as non-adaptive methods using a smaller number of measurements. We refer the reader to [16,18], as well as [20,21] for extensive empirical results and more detailed performance comparisons of these procedures. A complete analysis of these adaptive sensing procedures would ideally also include an analytical performance evaluation.…”
Section: Quantifying Performance
confidence: 99%
“…Following the Relevance Vector Machine (RVM) approach [8], different types of kernel matrices can be examined, such as the Gaussian kernel. On the other hand, we plan to examine the possibility of using other, more advantageous sparse priors, such as those presented in [10], [11], which have recently been applied to sparse generalized linear models. The third goal of our future work is to eliminate the dependence of the proposed regression mixture model on its initialization.…”
Section: Discussion
confidence: 99%
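As a minimal sketch of the kernel construction mentioned in this excerpt (a standard Gaussian/RBF kernel; the function name and parameters are hypothetical, not taken from the cited paper):

import numpy as np

def gaussian_kernel_matrix(X, Z, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of X and rows of Z.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    # Gaussian (RBF) kernel entries K[i, j] = exp(-gamma * ||x_i - z_j||^2).
    return np.exp(-gamma * sq_dists)

X = np.random.default_rng(1).normal(size=(10, 3))
K = gaussian_kernel_matrix(X, X)  # symmetric (10, 10) kernel matrix

In RVM-style sparse Bayesian regression, such a kernel matrix typically serves as the design matrix on which the sparsity prior acts.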
“…Sparse Bayesian regression is a methodology that has received a lot of attention lately; see for example [8], [9], [10] and [11]. Enforcing sparsity is a fundamental machine learning regularization principle and lies behind some well-known subjects such as feature selection.…”
Section: Introduction
confidence: 99%
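To make the sparsity-as-regularization connection explicit (a standard identity, not a claim from the excerpt): MAP estimation under a factorizing Laplace prior p(w) \propto \exp(-\tau \|w\|_1) is exactly \ell_1-regularized (Lasso-type) estimation,

\hat{w}_{\mathrm{MAP}} = \arg\max_w \left[ \log p(y \mid w) - \tau \|w\|_1 \right],

which is why such priors drive many coefficients to exactly zero and thereby perform feature selection.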