2019
DOI: 10.1080/10485252.2019.1632308

On double-index dimension reduction for partially functional data

Abstract: In this note, we consider the situation where we have a functional predictor as well as some more traditional scalar predictors, which we call the partially functional problem. We propose a semiparametric model based on sufficient dimension reduction, and thus our main interest is in dimension reduction although prediction can be carried out at a second stage. We establish root-n consistency of the linear part of the estimator. Some Monte Carlo studies are carried out as proof of concept.
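
To make the partially functional setup concrete, here is a minimal, illustrative sketch of one common route to sufficient dimension reduction with a functional predictor plus scalar predictors: truncate the functional part via functional principal components analysis (FPCA), then run classical sliced inverse regression (SIR) on the combined FPC scores and scalar covariates. This is not the authors' double-index estimator; the FPCA truncation and the tuning choices (`n_fpc`, `n_slices`) are assumptions made purely for illustration.

```python
# A minimal, illustrative sketch (NOT the authors' double-index estimator):
# handle the partially functional predictor (X(t), W) by (1) truncating the
# functional part X(t) with functional PCA and (2) running classical sliced
# inverse regression (SIR) on the combined FPC scores and scalar covariates.
# The truncation level n_fpc and slice count n_slices are hypothetical choices.
import numpy as np

def fpca_scores(X_curves, n_fpc):
    """FPC scores from densely observed curves (rows = subjects, columns =
    grid points); grid quadrature weights are ignored in this sketch."""
    Xc = X_curves - X_curves.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)                  # empirical covariance surface
    _, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    basis = vecs[:, ::-1][:, :n_fpc]           # leading eigenfunctions
    return Xc @ basis                          # n x n_fpc score matrix

def sir_directions(Z, y, n_slices=10, n_dirs=1):
    """Classical SIR: standardize, slice on y, average within slices, and
    take leading eigenvectors of the between-slice covariance."""
    n, p = Z.shape
    Zc = Z - Z.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(Zc, rowvar=False))
    Sinv_half = V @ np.diag(w ** -0.5) @ V.T   # inverse symmetric square root
    Zs = Zc @ Sinv_half
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):
        m = Zs[chunk].mean(axis=0)
        M += (len(chunk) / n) * np.outer(m, m)
    _, evecs = np.linalg.eigh(M)
    B = Sinv_half @ evecs[:, ::-1][:, :n_dirs] # map back to original scale
    return B / np.linalg.norm(B, axis=0)

# Toy usage: one functional predictor on a 50-point grid plus two scalars.
rng = np.random.default_rng(0)
n, grid = 200, 50
X = rng.standard_normal((n, grid)).cumsum(axis=1) / np.sqrt(grid)
W = rng.standard_normal((n, 2))
scores = fpca_scores(X, n_fpc=3)
y = scores[:, 0] + W[:, 0] + 0.1 * rng.standard_normal(n)
B = sir_directions(np.hstack([scores, W]), y)
print("estimated direction over (FPC scores, scalars):", B.ravel())
```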

Cited by 2 publications (2 citation statements, published 2020–2021); references 31 publications.
“… Estimation procedures: a range of estimation methods has been studied for the SFPLR model: a fully automatic procedure with data-driven, cross-validated bandwidth selection for the smoothing parameter of the nonparametric component [7]; the asymptotic normality of the linear part [8]; the setting where the number of observations per subject is completely flexible, with convergence rates for the nonparametric part [9]; a spline estimator of the nonparametric part together with its convergence rate [10]; two procedures, functional principal components regression (FPCR) and functional ridge regression (FRR), based on Tikhonov regularization [11][12][13]; new estimators for the parametric component, namely the semiparametric least squares estimator (SLSE) [14]; approximation of the nonparametric component by a B-spline function [15] or a polynomial spline [16], with the slope function estimated in the functional principal component basis [15,16]; k-nearest-neighbours (kNN) estimates, whose local adaptivity makes them perform better in practice than kernel methods, together with computational properties of this estimator [17][18][19]; the Functional Semiparametric Additive Model via COmponent Selection and Smoothing Operator (FSAM-COSSO) in the sparse setting [20]; sufficient dimension reduction methods such as sliced inverse regression (SIR) and sliced average variance estimation (SAVE) [21]; estimation in reproducing kernel Hilbert spaces (RKHS) [22]; frequentist and optimal model averaging [23]; a latent group structure recovered by k-means clustering [24]; a joint asymptotic framework called the joint Bahadur representation [25]; empirical likelihood estimation for non-functional high-dimensional covariates [26]; sparse and penalized least-squares estimators [27]; and software for carrying out these analyses [28]. Confidence regions: some papers include sections on computing confidence regions, and we do not repeat them here.…”
Section: Other Extensions
confidence: 99%
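
Since the statement above names sliced average variance estimation (SAVE) alongside SIR, here is a matching minimal sketch of classical SAVE, under the same illustrative assumptions as the SIR sketch following the abstract; it is not the implementation used in any of the cited papers.

```python
# A matching sketch of sliced average variance estimation (SAVE), the other
# sufficient dimension reduction method named in the statement above; same
# illustrative assumptions as the SIR sketch following the abstract, and not
# the implementation of any cited paper.
import numpy as np

def save_directions(Z, y, n_slices=10, n_dirs=1):
    """Classical SAVE: eigenvectors of sum_h p_h (I - Cov_h)^2 computed on
    standardized predictors, where Cov_h is the within-slice covariance."""
    n, p = Z.shape
    Zc = Z - Z.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(Zc, rowvar=False))
    Sinv_half = V @ np.diag(w ** -0.5) @ V.T   # inverse symmetric square root
    Zs = Zc @ Sinv_half
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):
        D = np.eye(p) - np.cov(Zs[chunk], rowvar=False)
        M += (len(chunk) / n) * (D @ D)
    _, evecs = np.linalg.eigh(M)
    B = Sinv_half @ evecs[:, ::-1][:, :n_dirs] # map back to original scale
    return B / np.linalg.norm(B, axis=0)

# Drop-in alternative to sir_directions above; unlike SIR, SAVE can also pick
# up directions that affect only the conditional variance of y, at the cost
# of needing more observations per slice.
```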
“…Also worth mentioning is the contribution by Maity and Huang (2012), in which a similar model is studied with the distinctive feature of using the functional covariates for stratification. Finally, a mixture of partial linear and single-index ideas is developed in Ding, Liu, Xu, and Zhang (2017), Wang, Feng, and Chen (2016), Yang, Lin, and Lian (2019), and Yu, Du, and Zhang (2020). By-product: a partial linear model for a single functional covariate.…”
Section: Dimension Reduction for Multifunctional Covariate
confidence: 99%