2010
DOI: 10.1214/09-aoas271
Regularized multivariate regression for identifying master predictors with application to integrative genomics study of breast cancer

Abstract: In this paper, we propose a new method remMap - REgularized Multivariate regression for identifying MAster Predictors - for fitting multivariate response regression models under the high-dimension-low-sample-size setting. remMap is motivated by investigating the regulatory relationships among different biological molecules based on multiple types of high dimensional genomic data. Particularly, we are interested in studying the influence of DNA copy number alterations on RNA transcript levels. For this purpose, …

Cited by 227 publications (249 citation statements)
References 58 publications
“…For example, the ℓ1-norm regularizer (e.g., [9]) leads to element sparsity; the ℓ2,1-norm regularizer (e.g., [36]) leads to group sparsity; and the mixed-norm regularizer (concatenating an ℓ1-norm regularizer with an ℓ2,1-norm regularizer, e.g., [22]) leads to mixed sparsity. To generate the sparsity, the ℓ1-norm regularizer treats each code as a singleton and thus generates the four codes in the first column of Fig. 2.…”
Section: Sparse Learning (mentioning)
confidence: 99%
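To make the three regularizers concrete, here is a minimal NumPy sketch of the penalty values for a coefficient matrix B whose rows play the role of groups. The function names and the row-wise grouping are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def l1_penalty(B):
    """Element sparsity: sum of absolute values over all entries."""
    return np.abs(B).sum()

def l21_penalty(B):
    """Group sparsity: sum of the l2 norms of the rows (groups)."""
    return np.linalg.norm(B, axis=1).sum()

def mixed_penalty(B, lam1, lam2):
    """Mixed sparsity: an l1 term plus an l2,1 term (sparse group lasso)."""
    return lam1 * l1_penalty(B) + lam2 * l21_penalty(B)
```

Minimizing with the ℓ1 term drives individual entries of B to zero, the ℓ2,1 term zeroes out whole rows, and the mixed penalty produces both patterns at once.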
“…However, it still generates sparse codes one sample at a time. The mixed sparsity has been explained (e.g., [22]) as first generating group sparsity for each sample, e.g., the sparsity in the second red box (i.e., group) in the first column of Fig. 2(c), and then generating element sparsity within the dense (i.e., non-sparse) groups, e.g., the second element in the first column of Fig. 2(c).…”
Section: Sparse Learning (mentioning)
confidence: 99%
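One standard way to realize this two-level pattern is the proximal operator of the sparse group lasso penalty λ1‖x‖1 + λ2‖x‖2, applied to each group: element-wise soft-thresholding followed by group-level shrinkage. A minimal sketch, assuming one group is passed in as a vector (the function name and interface are hypothetical):

```python
import numpy as np

def prox_sparse_group(v, lam1, lam2):
    """Prox of lam1*||x||_1 + lam2*||x||_2 for one group v.

    Soft-threshold each element (element sparsity), then shrink the whole
    group toward zero (group sparsity); a group whose thresholded l2 norm
    falls below lam2 is zeroed out entirely.
    """
    u = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)  # element-wise step
    norm = np.linalg.norm(u)
    if norm == 0.0:
        return u
    return max(1.0 - lam2 / norm, 0.0) * u  # group-level shrinkage
```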
“…Generating B in this way, we expect (1 − s2)·p predictors to be irrelevant for all q responses, and each predictor to be relevant for s1·q of the response variables. An n × p predictor matrix X is also generated, with rows drawn independently from N(0, Σ_X), where (Σ_X)_ij = 0.7^|i−j|, as in Yuan and Lin (2007) and Peng et al. (2010). We consider the AR(1) covariance model as the scale matrix of the errors, with Σ_ij = ρ^|i−j|.…”
Section: Model Design (mentioning)
confidence: 99%
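A minimal NumPy sketch of this simulation design. The dimensions, the sparsity levels s1 and s2, and the standard-normal signal magnitudes are illustrative assumptions; the cited papers' exact coefficient-generating scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 100, 60, 20
s1, s2, rho = 0.2, 0.5, 0.5

# Two-level sparse coefficient matrix B (p x q): a fraction s2 of the
# predictors is relevant at all; each relevant predictor affects roughly
# a fraction s1 of the q responses.
relevant = rng.random(p) < s2
B = np.zeros((p, q))
for j in np.where(relevant)[0]:
    mask = rng.random(q) < s1
    B[j, mask] = rng.normal(size=mask.sum())

# AR(1) covariance for the predictors: (Sigma_X)_ij = 0.7^|i-j|
i = np.arange(p)
Sigma_X = 0.7 ** np.abs(i[:, None] - i[None, :])
X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)

# AR(1) scale matrix for the errors: Sigma_ij = rho^|i-j|
k = np.arange(q)
Sigma = rho ** np.abs(k[:, None] - k[None, :])
E = rng.multivariate_normal(np.zeros(q), Sigma, size=n)

Y = X @ B + E
```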
“…In these situations, the traditional estimators of B and Σ, with pq and q(q + 1)/2 parameters respectively, perform rather poorly and are not suitable for prediction and other applications, so one must seek workable alternatives by regularizing these parameters. Traditionally, this has been done individually, by focusing on regularizing B or Σ alone under the headings of regularized multivariate regression (Peng et al., 2010) or regularized covariance estimation, respectively.…”
Section: Introduction (mentioning)
confidence: 99%
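As a concrete instance of regularizing B alone, a remMap-style criterion (sketched here without the coefficient-level indicator weights used in the original paper) combines an ℓ1 penalty for element sparsity with a row-wise ℓ2 penalty that can zero out entire predictors, so the surviving rows flag "master predictors":

```latex
\min_{B \in \mathbb{R}^{p \times q}} \;
  \frac{1}{2}\,\lVert Y - XB \rVert_F^2
  \;+\; \lambda_1 \sum_{j=1}^{p}\sum_{k=1}^{q} \lvert b_{jk} \rvert
  \;+\; \lambda_2 \sum_{j=1}^{p} \Bigl( \sum_{k=1}^{q} b_{jk}^2 \Bigr)^{1/2}
```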
“…Correlation gives insight into the strength of the relationship between CNA and relative mRNA levels. Similarly, regression analysis finds its place in integrative genomic analysis (Pollack et al., 2002; Stranger et al., 2007; Menezes et al., 2009; Peng et al., 2010; Asimit et al., 2011). Some authors choose to combine the two-step analysis with an assessment of relationship strength (Pollack et al., 2002; Heidenblad et al., 2005; van Wieringen et al., 2006; Tsukamoto et al., 2008; Bicciato et al., 2009).…”
(mentioning)
confidence: 99%