2013
DOI: 10.1093/biomet/ast041
Better subset regression

Abstract: To find efficient screening methods for high-dimensional linear regression models, this paper studies the relationship between model fitting and screening performance. Under a sparsity assumption, we show that a subset that includes the true submodel always yields a smaller residual sum of squares (i.e., better model fitting) than any that does not, in a general asymptotic setting. This indicates that, for screening important variables, we could follow a "better fitting, better screening" rule, i.e., pick a "bet…
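The abstract's core claim, that a candidate subset covering the true submodel fits better (smaller residual sum of squares) than one that misses an active variable, can be illustrated with a small simulation. This is an illustrative sketch only; the dimensions, coefficients, and noise level are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 20, 3          # samples, predictors, true sparsity
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = [3.0, -2.0, 1.5]   # true submodel: variables {0, 1, 2}
y = X @ beta + 0.5 * rng.standard_normal(n)

def rss(cols):
    """Residual sum of squares of ordinary least squares on a column subset."""
    Xs = X[:, cols]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ coef
    return float(r @ r)

covering = rss([0, 1, 2, 5])   # subset that includes the true submodel
missing = rss([1, 2, 5, 7])    # subset that omits active variable 0
print(covering < missing)      # "better fitting, better screening"
```

In this setup the covering subset's RSS is essentially the noise level, while omitting an active variable leaves its signal in the residual, so the comparison comes out in the covering subset's favor.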

Cited by 7 publications (18 citation statements). References 34 publications.
“…A sub-optimal solution to (10) can still include the true submodel A₀ (Xiong 2014). The famous ℓ₁-regularized method (lasso) (Tibshirani 1996) is a convex approximation to (10).…”
Section: Asymptotic Validity of Linear Screening
confidence: 99%
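The excerpt above describes the lasso as a convex surrogate for the combinatorial ℓ₀ problem. A minimal sketch of this idea, using a plain-NumPy proximal-gradient (ISTA) lasso solver; the data, penalty level, and iteration count are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]          # true submodel: variables {0, 1, 2}
y = X @ beta + 0.5 * rng.standard_normal(n)

# ISTA for (1/2n)||y - Xb||^2 + lam * ||b||_1
lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2  # 1/L for the smooth part
b = np.zeros(p)
for _ in range(2000):
    grad = -X.T @ (y - X @ b) / n
    z = b - step * grad
    # soft-thresholding: the proximal operator of the l1 penalty
    b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

support = np.flatnonzero(np.abs(b) > 1e-8)
print(support)
```

With strong signals relative to the noise, the estimated support covers the true submodel {0, 1, 2}, possibly along with a few small spurious coefficients, which is consistent with the excerpt's point that a sub-optimal (convex) solution can still include the true submodel.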
“…It seems that models as simple as possible should be considered first. Therefore, we fit the linear regression model to the data from high-dimensional computer experiments, and use the ℓ₀-screening principle for the linear model (Xiong 2014; Xu and Chen 2014) to screen the active input variables of the nonlinear simulator. The idea of this linear screening method is similar to that of the regression method in global sensitivity analysis for computer experiments (Santner, Williams, and Notz 2018), which uses regression coefficients under the linear regression model as sensitivity indices for the input variables.…”
Section: Introduction
confidence: 99%
“…Fairly recently, Xiong [7] introduced the better-fitting better-screening rule when discussing variable screening in high-dimensional linear models. This rule tells us that, under reasonable conditions, a subset with a smaller residual sum of squares possesses better asymptotic screening properties, i.e., it is more likely to include the true submodel asymptotically.…”
Section: Mathematical Problems in Engineering
confidence: 99%
“…Theorem 5 (weak comparison theorem). Suppose that { } weakly separates A from B. Denote the set of probability one where (7) holds by ( , ) and write = ∩ ∈A, ∈B ( , ). For all ∈ N, and are statistics valued in D satisfying…” [symbols lost in extraction]
Section: Theorem 4 Suppose That { } Strongly Separates
confidence: 99%
“…In the second stage, the coefficients in the screened M-submodel can be estimated by a penalized least squares method. In this paper we focus only on the traditional n > p case, which can be viewed as a study of the second stage when p > n. For studies on screening methods in the first stage, we refer the reader to [4, 5, 10, 12, 17, 20], among others.…”
Section: Introduction
confidence: 99%