Proceedings of the 2012 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972825.19

Adaptive Multi-task Sparse Learning with an Application to fMRI Study

Abstract: In this paper, we consider the multi-task sparse learning problem under the assumption that the dimensionality diverges with the sample size. The traditional ℓ1/ℓ2 multi-task lasso does not enjoy the oracle property unless a rather strong condition is enforced. Inspired by the adaptive lasso, we propose a multi-stage procedure, the adaptive multi-task lasso, to simultaneously conduct model estimation and variable selection across different tasks. Motivated by the adaptive elastic-net, we further propose the adaptive mu…
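The multi-stage idea described in the abstract can be illustrated with a small reweighting experiment. The following is a minimal sketch, assuming a shared design matrix X across all tasks and scikit-learn's MultiTaskLasso as the ℓ1/ℓ2 solver; the dimensions, alpha, and eps values are illustrative choices rather than the paper's settings, and the two-stage reweighting shown is the generic adaptive-lasso trick, not necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, T = 100, 50, 4                       # samples, features, tasks (illustrative)
X = rng.standard_normal((n, p))
W_true = np.zeros((p, T))
W_true[:5] = rng.standard_normal((5, T))   # 5 features shared across all tasks
Y = X @ W_true + 0.1 * rng.standard_normal((n, T))

# Stage 1: plain l1/l2 multi-task lasso gives an initial coefficient estimate.
init = MultiTaskLasso(alpha=0.1).fit(X, Y)
row_norms = np.linalg.norm(init.coef_.T, axis=1)   # per-feature l2 norm across tasks

# Stage 2: penalizing feature j by the adaptive weight 1 / (row_norm_j + eps)
# is equivalent to rescaling column j by (row_norm_j + eps), refitting the
# ordinary penalty, and mapping the coefficients back to the original scale.
eps = 1e-6
scale = row_norms + eps
refit = MultiTaskLasso(alpha=0.1).fit(X * scale, Y)
W_hat = (refit.coef_ * scale).T            # shape (p, T), back on the original scale

print("selected features:", np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-8))
```

Features that stage 1 estimates as weak receive large adaptive penalties and are pushed exactly to zero in stage 2, which is the mechanism behind the improved selection consistency the abstract refers to.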

Cited by 11 publications (11 citation statements) · References 24 publications

Citation statements:

“…In particular, an ℓ1-norm penalization is imposed to induce network sparsity (Achard and Bullmore, 2007; Chen et al., 2012; Friedman et al., 2008), and a fused regularization is imposed to preserve temporal smoothness by encouraging Θ(k) to have similar topology and correlation strengths to its adjoining networks (Tibshirani et al., 2005). FMGL was implemented using in-house software.…”
Section: Methods
Mentioning confidence: 99%
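The penalty structure this excerpt describes, sparsity on each network Θ(k) plus fusion of adjoining networks, can be written in one common form as the objective below. This is a sketch of the generic fused graphical lasso, not necessarily the exact FMGL formulation used there; S(k) (the sample covariance of window k) and the tuning parameters λ1, λ2 are assumed notation.

```latex
\min_{\Theta^{(1)},\dots,\Theta^{(K)} \succ 0}\;
\sum_{k=1}^{K}\Big(\operatorname{tr}\!\big(S^{(k)}\Theta^{(k)}\big)-\log\det\Theta^{(k)}\Big)
\;+\;\lambda_1\sum_{k=1}^{K}\sum_{i\neq j}\big|\Theta^{(k)}_{ij}\big|
\;+\;\lambda_2\sum_{k=2}^{K}\sum_{i\neq j}\big|\Theta^{(k)}_{ij}-\Theta^{(k-1)}_{ij}\big|
```

The λ1 term induces sparse networks, while the λ2 term penalizes changes between consecutive windows, which is what enforces the temporal smoothness mentioned in the quote.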
“…We introduced multi-task lasso learning (MTL) [23,24] in which many learning tasks are improved at the same time. This method produces accurate predictions and is highly effective.…”
Section: Multitask Visual Space Learning
Mentioning confidence: 99%
“…By simultaneously learning all tasks, MTL has shown great performance improvement in several related applications. Many existing MTL methods [2,8,9,15,17,20,26,46,51,52] are formulated as regularized optimization problems with an empirical loss term on the training data plus a regularization term. Their contributions usually focus on designing meaningful regularization terms in order to capture the underlying commonality among tasks.…”
Section: Related Work
Mentioning confidence: 99%
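The "empirical loss plus regularizer" template quoted above can be summarized as follows; the symbols (T tasks, per-task loss ℓ, weight matrix W with task columns w_t, regularizer Ω) are generic placeholders rather than the notation of any particular cited method.

```latex
\min_{W=[w_1,\dots,w_T]}\;
\sum_{t=1}^{T}\frac{1}{n_t}\sum_{i=1}^{n_t}\ell\big(y_{t,i},\,w_t^{\top}x_{t,i}\big)
\;+\;\lambda\,\Omega(W)
```

Different choices of Ω(W) encode different assumptions about task relatedness; the ℓ1/ℓ2 norm used by the multi-task lasso discussed in this paper is one instance, encouraging the tasks to share a common set of relevant features.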