2020
DOI: 10.48550/arxiv.2012.00460
Preprint

Functional Linear Regression with Mixed Predictors

Abstract: We consider a general functional linear regression model, allowing for both functional and high-dimensional vector covariates. Furthermore, the proposed model can accommodate discretized observations of functional variables and different reproducing kernel Hilbert spaces (RKHS) for the functional regression coefficients. Based on this general setting, we propose a penalized least squares approach in RKHS, where the penalties enforce both smoothness and sparsity on the functional estimators. We also show that t…
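
The penalized least squares idea in the abstract can be sketched on discretized data. The following is a minimal illustration, not the paper's method: all sizes, penalty weights, and the proximal-gradient solver are our own assumptions. A squared second-difference penalty stands in for the RKHS smoothness norm, and a group-lasso penalty across whole functional coefficients enforces sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper): n samples,
# p candidate functional covariates, each observed on an m-point grid.
n, p, m = 120, 5, 25
X = rng.normal(size=(n, p, m))                # discretized functional covariates
s_grid = np.linspace(0.0, 1.0, m)
beta_true = np.zeros((p, m))
beta_true[0] = np.sin(2.0 * np.pi * s_grid)   # only the first covariate is active
y = np.einsum("ijk,jk->i", X, beta_true) + 0.1 * rng.normal(size=n)

# Second-difference matrix: a discrete surrogate for the RKHS smoothness penalty.
D = np.diff(np.eye(m), n=2, axis=0)           # (m - 2) x m
DtD = D.T @ D

lam_smooth, lam_group, step, iters = 0.1, 2.0, 0.1, 500
beta = np.zeros((p, m))
for _ in range(iters):
    resid = np.einsum("ijk,jk->i", X, beta) - y
    grad = np.einsum("ijk,i->jk", X, resid) / n + 2.0 * lam_smooth * beta @ DtD
    beta -= step * grad
    # Group soft-thresholding: shrinks each whole functional coefficient,
    # setting entire inactive coefficients to zero (sparsity).
    norms = np.linalg.norm(beta, axis=1, keepdims=True)
    beta *= np.clip(1.0 - step * lam_group / np.maximum(norms, 1e-12), 0.0, None)

group_norms = np.linalg.norm(beta, axis=1)
```

On this toy design, the active coefficient (group 0) survives both penalties while the inert covariates are shrunk to zero by the group penalty.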

Cited by 5 publications (5 citation statements)
References 57 publications

“…Our results achieve the same problem-dependent scaling as in canonical finite-dimensional linear regression [Abbasi-Yadkori et al., 2011b,a; Peña et al., 2008; Hsu et al., 2012b]. On the other hand, our results specialize the functional regression setting of [Benatia et al., 2017; Wang et al., 2020] to CDF estimation, where minimal assumptions are made on the data generation process. Moreover, we also derive Ω(√(d/n)) information-theoretic lower bounds for functional linear regression of CDFs.…”
Section: Introduction (supporting)
confidence: 72%
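
As a quick sanity check on the 1/√n part of the scaling quoted above, a small simulation (our own illustrative design, not the cited papers') measures the average sup-norm error of the empirical CDF at two sample sizes; quadrupling the error ratio requires a 16-fold increase in n.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 201)  # evaluation grid; the true U(0,1) CDF equals the grid

def mean_sup_error(n_samples: int, reps: int = 200) -> float:
    """Average sup-norm error of the empirical CDF over `reps` simulations."""
    errs = []
    for _ in range(reps):
        x = np.sort(rng.random(n_samples))
        ecdf = np.searchsorted(x, grid, side="right") / n_samples
        errs.append(np.abs(ecdf - grid).max())
    return float(np.mean(errs))

# A 1/sqrt(n) rate predicts the error shrinks by about 4x when n grows 16x.
err_small = mean_sup_error(100)
err_large = mean_sup_error(1600)
```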
“…Here, λ_min/σ ≳ √(p₁ ∨ p₂) is required for consistent estimation of the singular vectors a_l, b_l (see also Cai et al. (2019)); while λ_min/τ ≳ n^{−α/(2α+1)} ∨ √((p₁ ∨ p₂)/n) is required for consistent estimation/approximation of the singular functions ξ_l, and it yields an interesting phase transition: when p₁ ∨ p₂ ≲ n^{1/(2α+1)}, the bound is dominated by the non-parametric rate n^{−α/(2α+1)}; when p₁ ∨ p₂ ≳ n^{1/(2α+1)}, the bound is dominated by the parametric rate √((p₁ ∨ p₂)/n). We note that such phase transitions between parametric and non-parametric rates appear commonly in the study of high-dimensional functional regression with sparsity, e.g., Wang et al. (2020b). To the best of our knowledge, we are the first to establish such a phenomenon in low-rank-based functional data analyses.…”
Section: Local Convergence (mentioning)
confidence: 77%
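
The crossover point can be checked numerically. Taking the parametric term to be √(p/n) (a reading under which the stated threshold is exact), the two error terms are equal precisely at p₁ ∨ p₂ ≍ n^{1/(2α+1)}; the numbers below are illustrative, not from the paper.

```python
import math

# Illustrative values: alpha = 2, n = 10**6.
alpha, n = 2.0, 10**6
nonparam = n ** (-alpha / (2 * alpha + 1))   # non-parametric rate n^{-a/(2a+1)}

def parametric(p: float) -> float:
    """Parametric term sqrt(p / n) for dimension p."""
    return math.sqrt(p / n)

# The two terms cross exactly at the threshold dimension n^{1/(2a+1)}.
threshold = n ** (1.0 / (2 * alpha + 1))
below = parametric(0.5 * threshold)   # non-parametric term dominates here
above = parametric(2.0 * threshold)   # parametric term dominates here
```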
“…For each l ∈ [r], we sample a_l, b_l uniformly from the unit spheres S^{p₁−1}, S^{p₂−1} and generate ξ_l from orthonormal basis functions {u_i(s)}_{i=1}^{10} ⊂ L²([0, 1]). Following Yuan and Cai (2010); Wang et al. (2020b), we set u_1(s) = 1 and u_i(s) = √2 cos((i − 1)πs) for i = 2, …”
Section: Simulation Studies (mentioning)
confidence: 99%
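
The cosine basis in this simulation design can be reproduced and its orthonormality in L²([0, 1]) checked numerically; the grid resolution is our own illustrative choice.

```python
import numpy as np

# u_1(s) = 1 and u_i(s) = sqrt(2) * cos((i - 1) * pi * s) for i = 2, ..., 10:
# an orthonormal system in L^2([0, 1]). m is an illustrative grid size.
m = 2001
s = np.linspace(0.0, 1.0, m)
U = np.ones((10, m))
for i in range(2, 11):
    U[i - 1] = np.sqrt(2.0) * np.cos((i - 1) * np.pi * s)

# Gram matrix under trapezoid quadrature; it should be close to the identity.
h = s[1] - s[0]
w = np.full(m, h)
w[0] = w[-1] = h / 2.0
gram = (U * w) @ U.T
```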
“…Zou et al., 2006), and derived an upper bound in the form of the sum of those on the lasso and ridge estimators. In the high-dimensional functional data analysis literature, with potentially many different functional covariates, Wang et al. (2020b) showed that the prediction upper bound is the sum of an upper bound related to the smoothness penalty and an upper bound related to the high-dimensionality.…”
Section: A Penalised Estimator (mentioning)
confidence: 99%