2010
DOI: 10.1198/jasa.2010.tm09666
Dimension Reduction in Regressions Through Cumulative Slicing Estimation

Abstract: In this paper we offer a complete methodology of cumulative slicing estimation for sufficient dimension reduction. In parallel to classical slicing estimation, we develop three methods, termed respectively cumulative mean estimation, cumulative variance estimation, and cumulative directional regression. Strong consistency for p = O(n^{1/2} / log n) and asymptotic normality for p = o(n^{1/2}) are established, where p is the dimension of the predictors and n is the sample size. Such asymptotic r…
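To make the construction in the abstract concrete, below is a minimal sketch of the cumulative mean estimation (CUME) variant, assuming a unit weight function, a full-rank predictor covariance, and no ties in Y. The function name cume_directions and its interface are illustrative, not from the paper.

```python
import numpy as np

def cume_directions(X, Y, d):
    """Sketch of cumulative mean estimation (CUME).

    Eigen-decomposes M = (1/n) sum_j m(Y_j) m(Y_j)^T, where
    m(y) = (1/n) sum_i Z_i 1{Y_i <= y} and Z is the standardized
    predictor, then maps the top-d eigenvectors back to X scale.
    """
    n, p = X.shape
    # Standardize predictors: Z = Sigma^{-1/2} (X - mean).
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)          # assumes full rank
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sigma_inv_sqrt
    # Cumulative means: row j of `cummeans` is m(Y_(j)) for sorted Y.
    order = np.argsort(Y)
    cummeans = np.cumsum(Z[order], axis=0) / n
    # Kernel matrix M, averaging over the empirical distribution of Y.
    M = cummeans.T @ cummeans / n
    # Top-d eigenvectors of M, back-transformed to the original scale.
    w, v = np.linalg.eigh(M)
    return Sigma_inv_sqrt @ v[:, ::-1][:, :d]
```

On a toy single-index model such as y = (X @ beta)**3 + noise, the leading column of the returned matrix should align with beta up to sign and scale. Note that, unlike slicing estimators, no number of slices has to be chosen.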

Cited by 136 publications (124 citation statements). References 21 publications.
“…For instance, it is straightforward to build multivariate regularized SIR approaches from Bernard-Michel et al (2009b), multivariate SIR α approaches from Gannoun and Saracco (2003) or multivariate kernel SIR methods from Wu (2008) and to obtain the associated clustering step. To avoid the choice of the number H of slices, one may also consider building a multivariate version of the CUME procedure from Zhu et al (2010a).…”
Section: Discussion (mentioning)
Confidence: 99%
“…Recently, Jiang et al (2014) proposed an inverse regression method for sparse functional data by estimating the inverse conditional mean functions with a two-dimensional smoother that requires considerable computation. Inspired by cumulative slicing estimation (CUME) for multivariate data (Zhu et al 2010), Yao et al (2015) proposed to borrow information across subjects via a one-dimensional smoother, named the functional cumulative slicing (FCS), which is closely related to the proposed method in this paper. The EDR methods are intended for continuous response and have been rarely used for classification problems due to few distinct response values.…”
Section: Effective Dimension Reduction (mentioning)
Confidence: 98%
“…Thus, one major advantage of EDR methods is "link-free" (Duan and Li 1991). Pioneered by Li (1991) that proposed the sliced inverse regression (SIR) using the information concerning the inverse conditional mean E(X |Y ), Cook and Weisberg (1991) considered the inverse variance estimation utilizing the information of var(X |Y ), Li (1992) dealt with the Hessian matrix of the regression curve, Chiaromonte et al (2002) modified sliced inverse regression for categorical predictors, Li and Wang (2007) worked with empirical directions, and Zhu et al (2010) proposed cumulative slicing estimation to improve upon SIR.…”
Section: Effective Dimension Reduction (mentioning)
Confidence: 99%
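For contrast with the CUME sketch above, here is a minimal sketch of the classical sliced inverse regression (SIR) of Li (1991) that the quoted passage describes, under the same standardization conventions; it makes visible the extra tuning parameter H (number of slices) that the cumulative approach removes. The name sir_directions is illustrative.

```python
import numpy as np

def sir_directions(X, Y, d, H=10):
    """Sketch of sliced inverse regression (SIR).

    Cuts Y into H roughly equal-sized slices and eigen-decomposes
    the between-slice covariance of the slice means of the
    standardized predictors.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)          # assumes full rank
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sigma_inv_sqrt
    # Slice the sorted responses into H groups of near-equal size.
    slices = np.array_split(np.argsort(Y), H)
    # Weighted covariance of the slice means of Z (E[Z] is ~0).
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    w, v = np.linalg.eigh(M)
    return Sigma_inv_sqrt @ v[:, ::-1][:, :d]
```

The estimate depends on the choice of H, which is exactly the tuning step that cumulative slicing estimation avoids by integrating over all cut points of Y.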
“…There are a number of proposals available in the literature for this purpose. Examples include sliced inverse regression (SIR) of Li (1991), sliced average variance estimation (SAVE) of Cook & Weisberg (1991), minimum average variance estimation (MAVE) of Xia et al (2002), contour regression (CR) of Li et al (2005), directional regression (DR) of Li & Wang (2007), discretization-expectation estimation (DEE) of Zhu et al (2010a), and the average partial mean estimation (APME) of Zhu et al (2010b). When there are measurement errors, the above methods need a modification for consistently estimating the matrix B up to a q × q orthonormal matrix C.…”
Section: Introduction (mentioning)
Confidence: 99%