2018
DOI: 10.1016/j.sigpro.2018.04.021
Hyperparameter selection for group-sparse regression: A probabilistic approach

Abstract: This work analyzes the effects on support recovery for different choices of the hyper- or regularization parameter in LASSO-like sparse and group-sparse regression problems. The hyperparameter implicitly selects the model order of the solution and is typically set using cross-validation (CV). This may be computationally prohibitive for large-scale problems, and it also often overestimates the model order, as CV optimizes for prediction error rather than support recovery. In this work, we propose a probabilistic a…
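As a concrete illustration of the abstract's point, the sketch below (my own example, not the paper's proposed method; the data and parameter values are invented for illustration) shows how a cross-validated LASSO, which tunes for prediction error, tends to select a support larger than the true model order:

```python
# A minimal sketch (not the paper's proposed method) of the abstract's point:
# a lambda chosen by cross-validation optimizes prediction error and tends to
# select a support larger than the true model order. All data and parameter
# values here are invented for illustration.
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5                          # samples, predictors, true support size
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 2.0                           # k truly active coefficients
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Cross-validation picks lambda for prediction accuracy ...
cv_fit = LassoCV(cv=5).fit(X, y)
cv_support = np.flatnonzero(cv_fit.coef_)

# ... which typically yields a denser solution than a larger, support-oriented lambda.
big_fit = Lasso(alpha=3 * cv_fit.alpha_).fit(X, y)
big_support = np.flatnonzero(big_fit.coef_)

print(f"true model order: {k}")
print(f"CV lambda {cv_fit.alpha_:.3f} -> support size {cv_support.size}")
print(f"3x CV lambda -> support size {big_support.size}")
```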

Cited by 12 publications (3 citation statements) · References 36 publications
“…Since the regularization parameter λ present in (22) is here heuristically set as explained in Sec. 4 and depends on setup conditions, such as the noise level and the number of target sources [28], its chosen value may be suboptimal in the simulations considered. Further tuning could be employed, for example, to investigate the proposed method's performance under higher levels of reverberation.…”
Section: Results
Mentioning confidence: 99%
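The noise-level dependence this quote mentions is typical of LASSO-type tuning. A common rule of thumb is sketched below (a hedged example of such a heuristic; not necessarily the scheme in the quoted paper's Sec. 4):

```python
# A common noise-dependent rule of thumb (a hedged sketch; not necessarily the
# heuristic of the quoted paper's Sec. 4): scale lambda with the noise standard
# deviation sigma and the number of predictors p ("universal threshold").
import math

def universal_lambda(sigma: float, p: int) -> float:
    """Return lambda = sigma * sqrt(2 * log(p))."""
    return sigma * math.sqrt(2.0 * math.log(p))

# Example: higher noise or more predictors call for stronger regularization.
print(universal_lambda(sigma=0.5, p=50))   # ~1.40
print(universal_lambda(sigma=1.0, p=500))  # ~3.53
```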
“…To select variables inside a group (instead of choosing all in a group as in group lasso), SGL [55,58] has an additional L1-norm penalty (see Equation for details). The resulting model using both the L1-norm and the group norm is better known as sparse group learning [53,58–60], with SGL using the least-squares loss (see Equation for details).…”
Section: Data Collection and Methodology
Mentioning confidence: 99%
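The equation references in this quote did not survive extraction. For context, the sparse group lasso objective the quote describes is commonly written as follows (standard notation from the SGL literature; this is an assumption about the intended formula, not the quoted paper's exact equation):

```latex
\min_{\beta}\; \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2
  + (1-\alpha)\,\lambda \sum_{g=1}^{G} \sqrt{p_g}\,\lVert \beta_{(g)} \rVert_2
  + \alpha\,\lambda\,\lVert \beta \rVert_1 , \qquad \alpha \in [0,1],
```

where β_(g) is the coefficient sub-vector of group g and p_g its size. The ℓ2 group norm zeroes out entire groups, while the added ℓ1 penalty selects variables inside the surviving groups; α = 0 recovers the group lasso and α = 1 the plain lasso.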
“…It is often a non-trivial task to select such hyperparameters properly, although there is some work indicating that one may formulate selection algorithms for this (see e.g. [7], and the references therein).…”
Section: Introduction
Mentioning confidence: 99%