2013
DOI: 10.1214/13-aos1163

Optimal classification in sparse Gaussian graphic model

Abstract: Consider a two-class classification problem where the number of features is much larger than the sample size. The features are masked by Gaussian noise with mean zero and covariance matrix $\Sigma$, where the precision matrix $\Omega=\Sigma^{-1}$ is unknown but is presumably sparse. The useful features, also unknown, are sparse and each contributes weakly (i.e., rare and weak) to the classification decision. By obtaining a reasonably good estimate of $\Omega$, we formulate the setting as a linear regression mo…
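The setting in the abstract can be illustrated with a minimal sketch: a Fisher-type linear rule built from an estimated precision matrix, applied to data with a sparse mean difference and a sparse precision structure. This is an assumption-laden toy version only; the paper's actual procedure additionally involves a sparse precision estimator and higher-criticism feature selection, neither of which is reproduced here.

```python
# Illustrative sketch only (not the paper's procedure): a plain Fisher-type
# linear rule w = Omega (mu1 - mu0) using a crude precision estimate.
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 200

# Sparse, weak signal: only the first 5 features separate the classes.
mu = np.zeros(p)
mu[:5] = 1.5

# AR(1) covariance, whose inverse (the precision matrix) is tridiagonal,
# i.e., sparse -- the kind of structure the paper exploits.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))

X0 = rng.multivariate_normal(np.zeros(p), Sigma, n)  # class 0 sample
X1 = rng.multivariate_normal(mu, Sigma, n)           # class 1 sample

# Stand-in precision estimate: ridge-regularized inverse of the pooled
# sample covariance (a sparse estimator such as glasso/CLIME would be
# used instead in the paper's regime).
S = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
Omega_hat = np.linalg.inv(S + 0.1 * np.eye(p))

# Fisher's rule: project onto w = Omega (mu1 - mu0), threshold at midpoint.
d = X1.mean(axis=0) - X0.mean(axis=0)
w = Omega_hat @ d
c = w @ (X1.mean(axis=0) + X0.mean(axis=0)) / 2

pred0 = X0 @ w > c  # False = correctly assigned to class 0
pred1 = X1 @ w > c  # True  = correctly assigned to class 1
acc = ((~pred0).sum() + pred1.sum()) / (2 * n)
print(f"training accuracy: {acc:.2f}")
```

Even this crude version shows why the precision matrix matters: projecting onto `Omega @ d` rather than `d` whitens the correlated noise before aggregating the weak per-feature signals.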

Cited by 35 publications (76 citation statements)
References 38 publications (90 reference statements)
“…It is worth emphasizing three notable differences between the proposed T ME and the well-known T skato (Lee, Wu & Lin, 2012). First, the proposed framework aggregates evidence across J variants based on a precision matrix-based transformation of the score vector, which can increase power for detecting sparse alternatives (Fan, Jin & Yao, 2013; Cai, Liu & Xia, 2014). Second, unlike T skato , T ME naturally aggregates the information from the linear and quadratic types of tests, based on a regression model without searching for the ‘optimal’ weight.…”
Section: Discussion
“…To this end, we propose a flexible and unifying linear mixed-effect regression model that requires only variant-specific summary statistics, and we show that earlier methods based on individual-level data are special cases of the proposed testing framework. The set-based association test statistic derived from the regression model inherently transforms the variant-specific summary statistics using the precision matrix to improve power for detecting sparse alternatives; see Fan, Jin & Yao (2013) and Cai, Liu & Xia (2014) for the use of the precision matrix in other high-dimensional analytical settings. Furthermore, the proposed method can incorporate additional variant-specific information as a covariate, e.g.…”
confidence: 99%
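The precision-matrix transformation the statement above describes can be sketched in a few lines: decorrelate a vector of variant-level score statistics with the precision matrix, then form a linear (burden-type) and a quadratic statistic from it. All names and the covariance model here are illustrative assumptions, not the cited framework's actual implementation.

```python
# Hedged sketch of a precision-matrix transformation of score statistics.
import numpy as np

rng = np.random.default_rng(1)
J = 10  # number of variants in the set (illustrative)

# Covariance of the score vector under the null (assumed known here;
# in practice it is estimated from the data).
V = 0.3 * np.ones((J, J)) + 0.7 * np.eye(J)
U = rng.multivariate_normal(np.zeros(J), V)  # observed score vector

Omega = np.linalg.inv(V)  # precision matrix
Z = Omega @ U             # precision-transformed scores: whitens correlation,
                          # sharpening sparse signals

T_linear = Z.sum()        # linear, burden-style aggregation
T_quad = U @ Omega @ U    # quadratic, variance-component-style aggregation
print(f"T_linear = {T_linear:.3f}, T_quad = {T_quad:.3f}")
```

Per the quoted discussion, the cited framework then combines the linear and quadratic pieces through a single regression model rather than grid-searching for an ‘optimal’ weight between them.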
“…A popular alternative is to use some form of regularized estimation. A number of strategies for high‐dimensional discriminant analysis have been developed in this vein, though mostly under the assumption that the Xij are normal rather than binomial random variables (Fan et al., ; Greenshtein and Park, ; Cai and Liu, ; Fan et al., ; Mai et al., ; Fan et al., ; Han et al., ; Dicker and Zhao, ).…”
Section: Methods
confidence: 99%
“…Fortunately, in many problems, various quantities in the LDA can be assumed sparse; see, for example, Witten & Tibshirani (2011); Shao et al. (2011); Cai & Liu (2011); Fan, Feng & Tong (2012); Mai, Zou & Yuan (2012), and Mai & Zou (2013) for a summary of selected sparse LDA methods. Further studies along this line can be found in Fan, Jin & Yao (2013) and Hao, Dong & Fan (2015). More recently, quadratic discriminant analysis has attracted increasing attention where the population covariance matrices are assumed static but different.…”
Section: Existing Work
confidence: 99%