2015
DOI: 10.5705/ss.2013.113

Direction Estimation in Single-Index Regressions via Hilbert-Schmidt Independence Criterion

Abstract: In this article, we use a Hilbert-Schmidt Independence Criterion to propose a new method for estimating directions in single-index models. This approach enjoys a model-free property and requires no link function to be smoothed or estimated. Further, we propose a permutation test to check whether the estimated single index is sufficient. The sampling distribution of our estimator is established. Finite sample performance of the proposed estimates is examined through simulation studies and compared with two well-est…
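For context, here is a minimal, illustrative sketch of HSIC-based direction estimation in the spirit of the abstract: it maximizes the (biased) empirical HSIC between the projected predictor β'x and the response y over unit-norm β, using Gaussian kernels with a median-heuristic bandwidth. The kernel choice, the multi-start Nelder-Mead optimizer, and all function names here are assumptions for illustration, not the authors' exact estimator or its asymptotic theory.

```python
# Illustrative sketch only: HSIC-based single-index direction estimation.
# Assumed choices: Gaussian kernels, median-heuristic bandwidth, multi-start
# Nelder-Mead over unit-norm beta.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform


def gaussian_gram(u, bandwidth=None):
    """Gaussian kernel Gram matrix; bandwidth defaults to the median heuristic."""
    u = np.asarray(u, dtype=float).reshape(len(u), -1)
    d = squareform(pdist(u, "sqeuclidean"))          # pairwise squared distances
    if bandwidth is None:
        med = np.median(d[d > 0])
        bandwidth = np.sqrt(med / 2.0) if med > 0 else 1.0
    return np.exp(-d / (2.0 * bandwidth ** 2))


def hsic(u, v):
    """Biased empirical HSIC: trace(K H L H) / n^2 with centering matrix H."""
    n = len(u)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gaussian_gram(u), gaussian_gram(v)
    return np.trace(K @ H @ L @ H) / n ** 2


def estimate_direction(x, y, n_starts=3, seed=0):
    """Maximize HSIC(x @ beta, y) over unit-norm beta via multi-start local search."""
    rng = np.random.default_rng(seed)
    p = x.shape[1]
    best_beta, best_val = None, -np.inf
    for _ in range(n_starts):
        b0 = rng.standard_normal(p)

        def neg_hsic(b):
            b = b / np.linalg.norm(b)                # enforce unit norm
            return -hsic(x @ b, y)

        res = minimize(neg_hsic, b0, method="Nelder-Mead")
        if -res.fun > best_val:
            best_val = -res.fun
            best_beta = res.x / np.linalg.norm(res.x)
    return best_beta


# Toy usage: y depends on x only through the single index beta'x.
rng = np.random.default_rng(1)
x = rng.standard_normal((100, 3))
beta_true = np.array([1.0, 2.0, 0.0]) / np.sqrt(5.0)
y = np.sin(x @ beta_true) + 0.1 * rng.standard_normal(100)
beta_hat = estimate_direction(x, y)
print(beta_hat)  # should align with beta_true up to sign
```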

Cited by 7 publications (12 citation statements). References 26 publications.
“…In this article, following the work of Zhang and Yin (2015) and Tan et al. (2018b), we develop a new approach using the Hilbert-Schmidt Independence Criterion (HSIC) for single-index models. The proposed method can handle the scenario p > n and requires the weakest conditions among the existing high-dimensional sparse SDR methods.…”
Section: Introduction (mentioning)
confidence: 99%
“…To sum up, the main contributions of our work are as follows. First, our method extends the HSIC-based single-index regression (Zhang and Yin, 2015) to a sufficient variable selection method. Since it does not involve the inversion of the sample covariance matrix, it can naturally handle a large-p, small-n situation.…”
Section: Introduction (mentioning)
confidence: 99%
“…Let z_i = (z_{i1}, …, z_{i p_z})' be a p_z × 1 vector of such key features. Finally, model (1) may be approximated by y_i = f(z_i) + σ_y ε_{iy} = f(z_{i1}, …, z_{i p_z}) + σ_y ε_{iy}. When z_i = Γ_z x_i, in which Γ_z is a p_z × p_x matrix, model (3) reduces to the well-known semiparametric index model in the dimension-reduction literature (Li, 1991; Cook and Ni, 2005; Zhang and Yin, 2014; Yang, 2016). Most existing dimension reduction methods focus on the scenario when p_x is smaller than n.…”
Section: Introduction (mentioning)
confidence: 99%
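A cleaned-up LaTeX rendering of the display quoted in that excerpt may help; notation and equation numbers follow the citing paper, not the article above.

```latex
% Model (1) approximated through the key features z_i:
\[
  y_i \;=\; f(z_i) + \sigma_y \varepsilon_{iy}
      \;=\; f(z_{i1}, \ldots, z_{i p_z}) + \sigma_y \varepsilon_{iy},
  \qquad z_i = (z_{i1}, \ldots, z_{i p_z})^{\top} \in \mathbb{R}^{p_z}.
\]
% When z_i = \Gamma_z x_i for a p_z \times p_x matrix \Gamma_z, this reduces to the
% semiparametric index model of the dimension-reduction literature:
\[
  y_i \;=\; f(\Gamma_z x_i) + \sigma_y \varepsilon_{iy}.
\]
```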