2008
DOI: 10.21236/ada487461

Union Support Recovery in High-Dimensional Multivariate Regression


Cited by 21 publications (9 citation statements, published 2010–2024)
References 72 publications

Citation statements, ordered by relevance:
“…In multi-task settings, we compare Multi-SConES to multi-task Lasso [23] and multi-task Grace. Since Grace is for single-task feature selection, we construct an artificial dataset including a given network using the reformulation in Lemma 1 of [18], followed by applying multi-task Lasso to the dataset (Supplementary Note C).…”
Section: Methods
confidence: 99%
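As a concrete illustration of the multi-task Lasso baseline referenced in this statement, here is a minimal Python sketch using scikit-learn's MultiTaskLasso, which implements the joint ℓ1/ℓ2 penalty; the synthetic data, alpha value, and threshold are illustrative assumptions, not the construction from the cited papers.

import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, k = 100, 50, 4                       # samples, features, tasks
W_true = np.zeros((p, k))
W_true[:5, :] = rng.normal(size=(5, k))    # 5 features active in every task
X = rng.normal(size=(n, p))
Y = X @ W_true + 0.1 * rng.normal(size=(n, k))

model = MultiTaskLasso(alpha=0.1).fit(X, Y)
# coef_ has shape (n_tasks, n_features); rows of its transpose that are
# (near-)zero correspond to features dropped jointly from every task.
support = np.flatnonzero(np.any(np.abs(model.coef_.T) > 1e-8, axis=1))
print("recovered union support:", support)

Because all tasks share one penalty, each feature is either kept for every task or discarded for every task, which is the shared-support behavior the quoted comparison relies on.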
“…Several multi-task versions of the Lasso, in which related tasks are coupled with each other, have been proposed as well: Multi-Task Lasso [23] uses an ℓ2-norm on each weight across all tasks to reward solutions where the same features are selected for all tasks. Graph-Guided Fused Lasso [16] extends this idea by coupling the weight vectors of correlated tasks: the more correlated two tasks are, the more strongly solutions in which they have similar weight vectors are rewarded.…”
Section: Related Work
confidence: 99%
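In symbols, the row-wise ℓ2 coupling described above is the standard ℓ1/ℓ2 (group) penalty; the formulation below is conventional notation, not quoted from the cited papers:

$$\hat{W} = \arg\min_{W \in \mathbb{R}^{p \times K}} \frac{1}{2n}\,\lVert Y - XW \rVert_F^2 + \lambda \sum_{j=1}^{p} \lVert W_{j\cdot} \rVert_2,$$

where $W_{j\cdot}$ collects the weights of feature $j$ across all $K$ tasks. Penalizing the ℓ2-norm of each row drives entire rows to zero, so the same features are selected for all tasks.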
“…Variable selection through the candidate models in (1.1) is equivalent to extracting the subset of relevant explanatory variables that are active in at least one response variable, which is referred to as the support union problem (see Obozinski et al. (2008, 2011)). A result of this variable selection is a faster and more cost-effective model and a better understanding of the underlying process that generated the data.…”
Section: Introduction
confidence: 99%
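For concreteness, the support union can be written as the union of the per-task supports (standard notation, assumed here rather than quoted):

$$S = \bigcup_{k=1}^{K} \operatorname{supp}\bigl(\beta^{(k)}\bigr) = \bigl\{\, j : \beta^{(k)}_j \neq 0 \text{ for at least one task } k \,\bigr\},$$

and exact recovery of $S$ is the success criterion studied in the cited Obozinski et al. work.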
“…Variable selection can always be achieved by learning the sparsity pattern of the parameters. In the multi-task learning setting, it is often assumed that the parameters for different tasks share the same sparsity pattern [2,18]. To achieve this effect, a popular approach is to adopt a joint sparsity regularization that encourages groupwise sparsity across multiple tasks.…”
Section: Introduction
confidence: 99%
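A common way to implement such a joint sparsity regularizer is proximal gradient descent, whose key step is row-wise group soft-thresholding. The sketch below is a generic illustration under that assumption, not code from the cited papers.

import numpy as np

def prox_group_l2(W, t):
    # Proximal operator of t * sum_j ||W[j, :]||_2 (block soft-thresholding):
    # each row is shrunk toward zero, and rows whose norm falls below t are
    # zeroed out entirely, which produces the shared sparsity pattern.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * W

Inside a proximal gradient loop one would iterate W = prox_group_l2(W - eta * grad, eta * lam), so the regularizer acts only through this row-wise shrinkage.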