2007
DOI: 10.1016/j.csda.2007.01.025

Input selection and shrinkage in multiresponse linear regression

Abstract: The regression problem of modeling several response variables using the same set of input variables is considered. The model is linearly parameterized and the parameters are estimated by minimizing the error sum of squares subject to a sparsity constraint. The constraint has the effect of eliminating useless inputs and constraining the parameters of the remaining inputs in the model. Two algorithms for solving the resulting convex cone programming problem are proposed. The first algorithm gives a pointwise sol…
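The estimation problem the abstract describes — least squares over all responses with a row-sparsity penalty on the coefficient matrix, so that useless inputs are dropped for every response at once — can be sketched with a generic proximal-gradient (ISTA) solver for the penalized form min_W 0.5‖Y − XW‖² + λ Σᵢ ‖Wᵢ‖₂. This is not the authors' convex cone programming algorithm; it is a minimal NumPy illustration of the same row-sparse multiresponse model, and the function names and the λ value are illustrative.

```python
import numpy as np

def group_soft_threshold(W, tau):
    """Row-wise block soft-thresholding: the proximal operator of tau * sum_i ||W_i||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

def multiresponse_lasso(X, Y, lam, n_iter=500):
    """Proximal-gradient sketch of min_W 0.5*||Y - X W||_F^2 + lam * sum_i ||W_i||_2.

    Rows of W belonging to inputs that carry no signal are driven exactly to
    zero, which eliminates those inputs from the model for all responses at once.
    """
    p, q = X.shape[1], Y.shape[1]
    W = np.zeros((p, q))
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)
        W = group_soft_threshold(W - step * grad, step * lam)
    return W
```

On simulated data where only a few inputs generate the responses, the returned W has exactly zero rows for the remaining inputs; lam trades sparsity against fit and would be chosen by cross-validation in practice.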

Cited by 80 publications (48 citation statements)
References 27 publications
“…This norm penalizes the sum of the l2 norms of each row of W (Yuan & Lin, 2006; Meier et al, 2006; Similä & Tikka, 2007; Park & Hastie, 2006; Obozinski et al, 2006; Argyriou et al, 2007; Schmidt et al, 2009). 3. A projected gradient method for l1,∞ regularization…”
Section: Previous Work
confidence: 99%
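The penalty in the quoted statement — the sum of the l2 norms of the rows of W — is straightforward to compute directly; a minimal NumPy illustration (the coefficient matrix W below is made up for the example):

```python
import numpy as np

# Hypothetical 3-input, 2-response coefficient matrix; each row holds
# one input's coefficients across all responses.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],   # an eliminated ("useless") input: its whole row is zero
              [1.0, 0.0]])

# l_{1,2} penalty: sum over rows of the row-wise l2 norms -> 5 + 0 + 1 = 6
penalty = np.linalg.norm(W, axis=1).sum()
print(penalty)  # 6.0
```

Because the norm is taken row-wise before summing, shrinking the penalty zeroes out entire rows, which is what couples the input selection across responses.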
“…Inducing sparsity as a consequence of optimization might, therefore, prove beneficial, for instance by adopting a multivariable generalization of, say, the LASSO algorithm [e.g. 44] or the method based on surrogate optimization described in [18]. This would have the advantage of performing regularization directly but would add to the cross-validation burden of selecting the degree of regularization required.…”
Section: Results
confidence: 99%
“…Spyromitros-Xioufis et al 38 analysed how several multi-label approaches, such as binary relevance, stacked generalization and classifier chains, can be applied straightforwardly in multi-target regression contexts. As for the algorithm adaptation category, a large number of methods have been proposed, such as statistical methods 37 , support vector machines 42,17 , kernel approaches 3 , multi-target regression trees 25 , and rule-based methods 1 .…”
Section: Multi-target Regression Problem
confidence: 99%