2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2018.8517036
BALSON: Bayesian Least Squares Optimization With Nonnegative L1-Norm Constraint

Abstract: A Bayesian approach termed BAyesian Least Squares Optimization with Nonnegative L1-norm constraint (BALSON) is proposed. The error distribution of data fitting is described by Gaussian likelihood. The parameter distribution is assumed to be a Dirichlet distribution. With the Bayes rule, searching for the optimal parameters is equivalent to finding the mode of the posterior distribution. In order to explicitly characterize the nonnegative L1-norm constraint of the parameters, we further approximate the true…
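The abstract is truncated before the approximation details, but the setup it describes (a Gaussian likelihood, a Dirichlet prior, and posterior-mode search under a nonnegative L1-norm constraint) can be sketched directly. The snippet below is a minimal illustration of that MAP objective only, not the authors' BALSON algorithm; the noise scale sigma, the Dirichlet concentration alpha, and all variable names are assumptions made for the example.

```python
# Minimal sketch (not the paper's algorithm): MAP estimation for a least-squares
# model with a Dirichlet prior, i.e. weights constrained to be nonnegative with
# unit L1 norm (the probability simplex). sigma and alpha are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])          # lies on the simplex
y = X @ w_true + 0.05 * rng.normal(size=n)

sigma = 0.05                                           # Gaussian noise scale (assumed)
alpha = np.full(d, 1.2)                                # Dirichlet concentration (assumed)
eps = 1e-9                                             # keeps log(w) finite on the boundary

def neg_log_posterior(w):
    # Gaussian-likelihood fit term plus Dirichlet log-prior term (up to constants)
    fit = 0.5 * np.sum((y - X @ w) ** 2) / sigma ** 2
    prior = -np.sum((alpha - 1.0) * np.log(w + eps))
    return fit + prior

res = minimize(
    neg_log_posterior,
    x0=np.full(d, 1.0 / d),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * d,                           # nonnegativity
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],  # ||w||_1 = 1
)
print(np.round(res.x, 3))
```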

Cited by 4 publications (3 citation statements)
References 31 publications (50 reference statements)
“…Supposing a group of N vector samples {μ^(1), …, μ^(N)} that follows the Dirichlet distribution Dir(α), the mean and variance of the Dirichlet distribution can be matched by those of the samples [28] as…”
Section: Methodology, 3.1 Dirichlet Distribution (mentioning)
Confidence: 99%
“…Supposing a group of N vector samples {μ^(1), …, μ^(N)} that follows the Dirichlet distribution Dir(α), the mean and variance of the Dirichlet distribution can be matched by those of the samples [13] as…”
Section: Dirichlet Distribution (mentioning)
Confidence: 99%
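Both excerpts describe the same moment-matching step: estimating the Dirichlet parameters α from the sample mean and variance of simplex-valued samples. The sketch below illustrates that idea using the standard Dirichlet moment identities; averaging the per-component estimates of the total concentration α0 is a convention assumed here, since the excerpts cut off before the exact matching equations.

```python
# Sketch of Dirichlet moment matching as described in the quoted passages:
# estimate alpha from samples mu^(1), ..., mu^(N) by matching the sample mean
# and variance to those of Dir(alpha). Averaging the per-component alpha_0
# estimates is one convention; the cited works may differ in detail.
import numpy as np

def dirichlet_moment_match(samples, eps=1e-12):
    """samples: (N, K) array of simplex-valued vectors drawn from Dir(alpha)."""
    m = samples.mean(axis=0)                      # sample mean of each component
    v = samples.var(axis=0)                       # sample variance of each component
    # For Dir(alpha): Var[mu_k] = m_k (1 - m_k) / (alpha_0 + 1)
    # => alpha_0 = m_k (1 - m_k) / Var[mu_k] - 1, averaged over components here
    alpha0 = np.mean(m * (1.0 - m) / (v + eps) - 1.0)
    return alpha0 * m                             # alpha_k = alpha_0 * m_k

# Quick check against samples from a known Dirichlet
rng = np.random.default_rng(0)
alpha_true = np.array([2.0, 5.0, 3.0])
samples = rng.dirichlet(alpha_true, size=20000)
print(np.round(dirichlet_moment_match(samples), 2))   # roughly [2. 5. 3.]
```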
“…In regression modelling for high dimensional data, redundancy of covariates is generally addressed by regularization and variable-selection strategies (Tibshirani, 2011; Williams, 1995; Xie et al., 2018). Accordingly, in the context of FMR models, it is crucial to retain only the most significant covariates in each subpopulation to avoid overfitting and to strengthen model interpretability.…”
Section: Introduction (mentioning)
Confidence: 99%
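As a hedged illustration of the regularization and variable-selection idea this excerpt refers to (a generic example, not the FMR-specific procedure of the citing work), an L1-penalized regression drives redundant coefficients to exactly zero; the data and penalty strength below are invented for the example.

```python
# Generic L1 (lasso) variable selection: redundant covariates get coefficient 0.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
beta = np.zeros(d)
beta[:3] = [1.5, -2.0, 1.0]                      # only 3 of 20 covariates matter
y = X @ beta + 0.1 * rng.normal(size=n)

model = Lasso(alpha=0.1).fit(X, y)               # alpha = L1 penalty strength (assumed)
print(np.flatnonzero(model.coef_))               # indices of retained covariates
```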