2014
DOI: 10.1016/j.jmva.2014.08.002
Lasso penalized model selection criteria for high-dimensional multivariate linear regression analysis

Cited by 10 publications (8 citation statements)
References 22 publications
“…When p_n is greater than n, we cannot directly calculate S^{-1} and thus GC_p. Therefore, we need different approaches to estimate a covariance matrix Σ, such as sparse or ridge estimation (e.g., Yamamura, Yanagihara and Srivastava, 2010; Katayama and Imori, 2014; Fujikoshi and Sakurai, 2016). If we can estimate Σ accurately via these procedures, ALE and AME can be established by using it in place of S. It should also be noted that our proof depends on the assumption that the response matrix follows a Gaussian distribution.…”
Section: Discussion (mentioning)
confidence: 99%
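This passage is about replacing the singular sample covariance S when p_n exceeds n. The snippet below is a minimal sketch of the ridge idea (the function name ridge_covariance and the value lam=0.1 are hypothetical choices for illustration, not the estimators of the cited papers):

```python
import numpy as np

def ridge_covariance(E, lam=0.1):
    """Ridge-type covariance estimate S + lam * I.

    When the dimension p exceeds the sample size n, the sample
    covariance S is singular, so S^{-1} (and hence GC_p) cannot be
    computed directly; adding a ridge term restores invertibility.
    """
    n, p = E.shape
    S = E.T @ E / n          # sample covariance
    return S + lam * np.eye(p)

# p > n: S is rank-deficient, but the ridge estimate is invertible.
rng = np.random.default_rng(0)
E = rng.standard_normal((50, 200))      # n = 50, p = 200
Sigma_hat = ridge_covariance(E, lam=0.1)
Sigma_inv = np.linalg.inv(Sigma_hat)    # usable in place of S^{-1}
```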
“…The reason why this inversion arises may be that the difference in risks between two over-specified models (i.e., models including the true model) diverges as n and p_n tend to infinity, and thus the penalty terms of C_p and AIC are moderate but that of BIC is too strong. In addition to these studies, model selection criteria in high-dimensional data contexts and their consistency properties have been vigorously studied in various models and situations (e.g., Katayama and Imori, 2014; Imori and von Rosen, 2015; Yanagihara, 2015; Fujikoshi and Sakurai, 2016; Bai, Choi and Fujikoshi, 2018).…”
Section: Introduction (mentioning)
confidence: 99%
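The contrast drawn here between C_p/AIC-type and BIC-type penalties can be made concrete with a generic criterion family; the block below is a hedged sketch of a standard form (the notation \hat\Sigma_j, k_j, and m(n) is mine, and the cited papers' exact definitions differ in details):

```latex
% A generic information criterion for a candidate model j in
% multivariate linear regression with p responses (sketch only):
%   \hat\Sigma_j : MLE of the error covariance under model j,
%   k_j         : number of explanatory variables in model j,
%   m(n)        : penalty multiplier.
\[
  \mathrm{IC}_m(j) = n \log\bigl|\hat{\Sigma}_j\bigr| + m(n)\, p\, k_j,
  \qquad m_{\mathrm{AIC}}(n) = 2, \quad m_{\mathrm{BIC}}(n) = \log n .
\]
% The quote's point: if the risk difference between two over-specified
% models diverges as n and p_n grow, the moderate penalty m(n) = 2 can
% still separate models, while m(n) = \log n becomes too strong.
```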
“…It is straightforward to show that this is weaker than assuming lim inf_{n→∞} n^{-1} λ_min(X'X) > 0, where λ_min(A) is the minimum eigenvalue of a symmetric matrix A. The assumption for the true regression coefficients is essentially used in Katayama and Imori (2014). For example, when all the elements of each true coefficient vector are nonzero constants not converging to 0, the assumption for the true regression coefficients holds.…”
Section: Assumptions For Consistency (mentioning)
confidence: 99%
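As a quick numerical illustration of the eigenvalue condition lim inf_{n→∞} n^{-1} λ_min(X'X) > 0 (my own sketch, not code from the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 10  # number of explanatory variables (fixed)

# For i.i.d. standard-normal rows, n^{-1} X'X converges to I_k, so
# n^{-1} lambda_min(X'X) stays bounded away from 0 -- i.e., the
# condition lim inf_{n -> inf} n^{-1} lambda_min(X'X) > 0 holds.
for n in (100, 1000, 10000):
    X = rng.standard_normal((n, k))
    lam_min = np.linalg.eigvalsh(X.T @ X).min()
    print(n, lam_min / n)
```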
“…Relaxing the normality assumption, Yanagihara (2019) focused on conditions for consistency of the GIC and GC_p criterion under the hybrid-moderate-high-dimensional asymptotic framework. As such, therein, p does not exceed n. On the other hand, in the context where p > n, Katayama and Imori (2014) considered variable selection criteria based on a lasso-type estimation for the inverse of the covariance matrix. Under the normality assumption, they showed that the criteria are consistent in a restricted-ultrahigh-dimensional asymptotic framework such that both n and p go to infinity but p may exceed n and log p/n → …”
Section: Introduction (mentioning)
confidence: 99%
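A "lasso-type estimation for the inverse of the covariance matrix" can be sketched with scikit-learn's GraphicalLasso; this is only a stand-in under assumed settings (alpha=0.5, synthetic residuals), not the estimator of Katayama and Imori (2014):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
E = rng.standard_normal((80, 120))   # n = 80 residual vectors, p = 120 > n

# L1-penalized (lasso-type) estimate of the precision matrix
# Sigma^{-1}; alpha controls the sparsity of the estimate.
gl = GraphicalLasso(alpha=0.5, max_iter=200).fit(E)
Omega_hat = gl.precision_     # sparse estimate of Sigma^{-1}
Sigma_hat = gl.covariance_    # corresponding covariance estimate
```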
“…GIC is a popular procedure for choosing the final model in variable selection. In the literature, there are many papers investigating its properties: for instance, [18, 32] in linear models, [7, 14] in GLMs, [16] in multivariate linear regression, [38] for SVMs, and [17] for general convex loss functions. GIC is often applied to pathwise algorithms under the common assumption that the true model lies on the path.…”
Section: Introduction (mentioning)
confidence: 99%
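A small sketch of GIC applied to a pathwise algorithm, as the quote describes: compute GIC at every model on a lasso coefficient path and pick the minimizer. The choices a_n = log n and df = number of nonzero coefficients are common proxies assumed here, not taken from the cited works:

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(3)
n, k = 200, 30
X = rng.standard_normal((n, k))
beta = np.zeros(k); beta[:3] = (2.0, -1.5, 1.0)   # sparse truth
y = X @ beta + rng.standard_normal(n)

alphas, coefs, _ = lasso_path(X, y)               # coefs: (k, n_alphas)

# GIC(j) = n * log(RSS_j / n) + a_n * df_j, evaluated at every model
# on the lasso path; a_n = log(n) is a BIC-type choice.
a_n = np.log(n)
rss = ((y[:, None] - X @ coefs) ** 2).sum(axis=0)
df = (coefs != 0).sum(axis=0)
gic = n * np.log(rss / n) + a_n * df

best = gic.argmin()
print("selected alpha:", alphas[best])
print("selected support:", np.flatnonzero(coefs[:, best]))
```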