2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2018.8516963
Sparse Bayesian Binary Logistic Regression Using the Split-and-Augmented Gibbs Sampler

Cited by 12 publications (15 citation statements)
References 30 publications
“…results of experiments aimed at comparing the proposed methodology with that of current state-of-the-art (optimization and Bayesian) methods for the inverse problems discussed in Section IV. All the results presented in this section have been obtained using MATLAB, on a computer equipped with an Intel Xeon 3.70 GHz processor and 16.0 GB of RAM, running Windows 7. Other examples of the proposed approach on machine learning problems can be found in [30], [49]. A. Deconvolution with a smooth prior. 1) Problem considered: The Gaussian sampling problem introduced in Section IV-B is considered.…”
mentioning
confidence: 99%
“…convexity, differentiability) of the potential functions $f_1$ and $f_2$, the coupling function can be adaptively chosen. For instance, [20, 22] consider $\phi_\rho(x) = (2\rho^2)^{-1} \|x - z\|_2^2$ for its gradient-Lipschitz, differentiability, and convexity properties, while [21] invokes conjugacy arguments to choose $\phi_\rho$. For a detailed discussion of the Gibbs sampling procedure, we refer the reader to [20, Section III.A].…”
Section: Gibbs Sampler
mentioning
confidence: 99%
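The quadratic coupling $\phi_\rho(x) = (2\rho^2)^{-1}\|x - z\|_2^2$ quoted above makes both conditionals of the split-and-augmented Gibbs sampler (SGS) Gaussian whenever the potentials are themselves quadratic. The following is a minimal Python sketch of that two-block scheme on a toy Gaussian model; the ridge potential $f_2$, all parameter values, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy split-and-augmented Gibbs sampler (SGS) sketch for the target
#   pi(x) ∝ exp(-f1(x) - f2(x)),   f1(x) = ||y - A x||^2 / (2 sigma2),
#   f2(x) = lam * ||x||^2 / 2      (ridge potential: an assumption for this toy).
# The split variable z is coupled to x through phi_rho(x) = ||x - z||^2 / (2 rho2),
# so both conditionals x|z and z|x are Gaussian and can be sampled exactly.

d, n = 20, 50
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
sigma2, lam, rho2 = 0.01, 1.0, 0.1

# The precision of x | z is constant, so factor it once.
Q_x = A.T @ A / sigma2 + np.eye(d) / rho2
L = np.linalg.cholesky(Q_x)          # Q_x = L @ L.T
Aty = A.T @ y / sigma2

def sample_x_given_z(z):
    """Draw x ~ N(Q_x^{-1} (A^T y / sigma2 + z / rho2), Q_x^{-1})."""
    mean = np.linalg.solve(Q_x, Aty + z / rho2)
    # mean + L^{-T} eps has covariance Q_x^{-1}.
    return mean + np.linalg.solve(L.T, rng.standard_normal(d))

def sample_z_given_x(x):
    """z | x is N(x / (1 + lam * rho2), (lam + 1/rho2)^{-1} I)."""
    prec = lam + 1.0 / rho2
    return x / (1.0 + lam * rho2) + rng.standard_normal(d) / np.sqrt(prec)

x = np.zeros(d)
samples = []
for it in range(2000):
    z = sample_z_given_x(x)
    x = sample_x_given_z(z)
    if it >= 500:                    # discard burn-in
        samples.append(x)

print("posterior mean estimate:", np.mean(samples, axis=0)[:5])
```

Smaller values of rho2 tighten the coupling, so the marginal of x lies closer to the original target at the price of slower mixing; this bias-versus-mixing trade-off is the relaxation the excerpts refer to.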
“…As with the performance results shown in Table 1 for the first experiment, SGS achieves reconstruction performance similar to that of ADMM or P-MYULA while reducing the computational time. Additional illustrations of the proposed approach can be found in [20, 22, 21], where the derived Gibbs sampler is shown to be more efficient than state-of-the-art MCMC algorithms. Moreover, it can be distributed over multiple nodes with a well-chosen variable-splitting strategy.…”
Section: Image Deblurring in the Wavelet Domain
mentioning
confidence: 99%
“…Our approach was initially presented in Rendell et al. (2018). A proposal to use essentially the same framework in a serial context has been independently and contemporaneously published by Vono, Dobigeon, and Chainais (2019), who construct a Gibbs sampler via a "variable splitting" approach. Rather than distributing the computation, those authors focus on the setting where b = 1 to obtain a relaxation of the original simulation problem.…”
Section: Introduction
mentioning
confidence: 99%
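To make the b referenced above concrete: in this line of work a posterior $\pi(x) \propto \exp(-\sum_{i=1}^{b} f_i(x))$ is relaxed by introducing one split variable per block, each tied to $x$ through the quadratic coupling. A sketch of the augmented density, with notation assumed for illustration rather than quoted from either paper:

```latex
% Augmented target with b splits z_1, ..., z_b, each coupled to x through a
% quadratic potential; rho controls the coupling tightness. With b = 1 this
% reduces to the single-split relaxation mentioned in the excerpt above.
\[
  \pi_\rho(x, z_1, \dots, z_b) \propto
  \exp\!\Big( -\sum_{i=1}^{b} f_i(z_i)
              \;-\; \sum_{i=1}^{b} \frac{\|z_i - x\|_2^2}{2\rho^2} \Big)
\]
```

Given $x$, the splits $z_1, \dots, z_b$ are conditionally independent, so their updates can be carried out in parallel across nodes; that factorization is what the "distributed over multiple nodes" remark in the preceding excerpt relies on.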