2018
DOI: 10.1109/tsp.2018.2824286
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem

Abstract: In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities, referred to as the Rectified Gaussian Scale Mixture (R-GSM), to model the sparsity-enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities, such as the rectified Laplacian and rectified Student's t distributions, with a proper choice of the mixing density. We utilize the hierarchical …
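
As context, a minimal baseline sketch of the non-negative least squares problem that the paper's S-NNLS framework extends with a sparsity-promoting R-GSM prior. This uses SciPy's standard NNLS solver and is not the paper's evidence-maximization algorithm; the problem sizes and data are illustrative.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    m, n, k = 50, 100, 5                          # illustrative problem sizes
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)                          # sparse, non-negative ground truth
    x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 3.0, size=k)
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    # Plain NNLS: min_x ||y - A x||_2 subject to x >= 0 (no sparsity prior).
    x_hat, residual_norm = nnls(A, y)
    print(np.count_nonzero(x_hat), round(residual_norm, 3))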

Cited by 22 publications (13 citation statements) · References 86 publications

“…Unlike NNOMP or NNLASSO, there is no explicit non-negativity constraint imposed in the basic SBL algorithm. In our implementation, the non-negativity is simply imposed at the end of the optimization by setting to 0 any negative-valued elements in µ, though more principled, albeit more computationally heavy, approaches such as [33] can be adopted.…”
Section: The Non-Negative LASSO (NNLASSO)
Mentioning confidence: 99%
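
A one-line illustration of the post-hoc rectification this statement describes: zeroing negative entries of the SBL posterior mean after optimization (the variable mu and its values are illustrative).

    import numpy as np

    mu = np.array([0.8, -0.1, 2.3, -0.05, 0.0])   # illustrative SBL posterior mean
    mu = np.clip(mu, 0.0, None)                   # set negative-valued elements to 0
    print(mu)                                     # [0.8 0.  2.3 0.  0. ]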
“…They also give lower bounds on the number of rows m of left-regular bipartite graph matrices whose column weight is more than 2, for them to have high girth and consequently satisfy RNSP of order k, given k and n [23, Eqn. 32, 33]. Given k and n, these lower bounds are minimized for graphs of girth 6 and 8, and the bounds are, respectively, m ≥ k√n and m ≥ k^{3/2}√n ([23, Eqn.…”
Section: Optimality of Girth 6 Matrices
Mentioning confidence: 99%
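
Restated in display form, with a purely illustrative numeric instance (the bounds are quoted from [23]; the numbers below are not from the source):

    % girth-6 and girth-8 lower bounds on m from [23], as quoted above
    \[
      m \ge k\sqrt{n} \quad (\text{girth } 6),
      \qquad
      m \ge k^{3/2}\sqrt{n} \quad (\text{girth } 8).
    \]
    % Illustrative instance: k = 10, n = 10^4 gives m >= 10 \cdot 100 = 1000
    % for girth 6, and m >= 10^{3/2} \cdot 100 \approx 3163 for girth 8.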
“…, x_n]^T. A smarter yet more elaborate approach to impose the non-negativity constraint as part of the optimization problem is to use algorithms such as Rectified SBL [12] that assume the coordinates in x follow a different distribution than the Gaussian distribution used in SBL.…”
Section: Sparse Bayesian Learning (SBL)
Mentioning confidence: 99%
“…To complete the model, a prior on the columns of H, which are assumed to be independent and identically distributed, must be specified. This work considers separable priors of the form p_{H(:,j)} = ∏_{i=1}^{n} p_{H(i,j)}, where p_{H(i,j)} has a scale mixture representation [37, 38]. Separable priors are considered because, in the absence of prior knowledge, it is reasonable to assume independence amongst the coefficients of H. The case where dependencies amongst the coefficients exist is considered in Section 5.…”
Section: Sparse Non-Negative Least Squares Framework Specification
Mentioning confidence: 99%
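
A short generative sketch of such a separable scale-mixture prior: each entry of H gets its own scale drawn from a mixing density, followed by a rectified Gaussian draw. The inverse-gamma mixing density here is an illustrative choice, not necessarily the one used in the paper; a zero-location rectified Gaussian is sampled as the absolute value of a Gaussian.

    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 8, 4    # H is n x r; entries independent, so the prior is separable

    # gamma_ij ~ inverse-gamma mixing density (illustrative choice)
    gamma = 1.0 / rng.gamma(shape=2.0, scale=1.0, size=(n, r))
    # H_ij | gamma_ij ~ rectified Gaussian with scale gamma_ij (half-normal)
    H = np.abs(rng.normal(loc=0.0, scale=np.sqrt(gamma)))
    print(H.shape, bool(H.min() >= 0))            # (8, 4) True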
“…One reason for the use of heavy-tailed priors is that they are able to model both the sparsity and large non-zero entries of H. The RPE encompasses many rectified distributions of interest. For instance, the RPE reduces to a Rectified Gaussian by setting z = 2, which is a popular prior for modeling non-negative data [47, 38] and results in a Rectified Gaussian Scale Mixture in (14). Setting z = 1 corresponds to an Exponential distribution and leads to an Exponential Scale Mixture in (14) [48].…”
Section: Sparse Non-Negative Least Squares Framework Specification
Mentioning confidence: 99%
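
A hedged sketch of the two special cases named here. Assuming a rectified power exponential density proportional to exp(-(x/alpha)^z) on x >= 0 (this parameterization is an assumption for illustration, not necessarily the paper's), draws can be generated as alpha * G^(1/z) with G ~ Gamma(1/z, 1); z = 2 then yields a rectified Gaussian and z = 1 an Exponential with mean alpha.

    import numpy as np

    rng = np.random.default_rng(2)

    def rpe_sample(z, alpha, size, rng):
        # Rectified power exponential: density ~ exp(-(x/alpha)^z) on x >= 0,
        # sampled via the transform x = alpha * G**(1/z), G ~ Gamma(1/z, 1).
        return alpha * rng.gamma(shape=1.0 / z, scale=1.0, size=size) ** (1.0 / z)

    x_rgauss = rpe_sample(z=2.0, alpha=1.0, size=100_000, rng=rng)  # rectified Gaussian
    x_expon = rpe_sample(z=1.0, alpha=1.0, size=100_000, rng=rng)   # exponential
    print(round(x_expon.mean(), 2))   # ~1.0, matching the Exp(1) mean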