2015 · DOI: 10.1111/insr.12116
Functional Principal Component Regression and Functional Partial Least‐squares Regression: An Overview and a Comparative Study

Abstract: Functional data analysis is a field of growing importance in Statistics. In particular, the functional linear model with scalar response is surely the model that has attracted the most attention in both theoretical and applied research. Two of the most important methodologies used to estimate the parameters of the functional linear model with scalar response are functional principal component regression and functional partial least-squares regression. We provide an overview of estimation methods based on these met…
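
To make the two estimators in the abstract concrete, the following is a minimal sketch, not taken from the paper, of how the two approaches are typically run on curves observed over a common grid. The simulated curves, the coefficient function, and the use of scikit-learn's PCA and PLSRegression as stand-ins for their functional counterparts are all illustrative assumptions.

```python
# Minimal sketch (assumptions: curves sampled on a common grid; scikit-learn's
# PCA + linear regression and PLSRegression stand in for the functional
# estimators, which the paper develops via basis expansions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 100                      # n curves, each observed at p grid points
t = np.linspace(0, 1, p)

# Simulated functional covariates X_i(t) and scalar responses y_i.
X = np.array([np.sin(2 * np.pi * (t + rng.uniform())) + 0.1 * rng.standard_normal(p)
              for _ in range(n)])
beta = np.exp(-t)                    # hypothetical coefficient function
y = X @ beta / p + 0.05 * rng.standard_normal(n)   # Riemann-sum integral + noise

m = 4                                # number of components (tuning parameter)

# FPCR: project curves onto the first m principal components, then regress.
scores = PCA(n_components=m).fit_transform(X)
fpcr = LinearRegression().fit(scores, y)

# FPLS: components chosen to maximize covariance with the response.
fpls = PLSRegression(n_components=m).fit(X, y)

print("FPCR R^2:", fpcr.score(scores, y))
print("FPLS R^2:", fpls.score(X, y))
```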

Cited by 71 publications (43 citation statements) · References 38 publications

Citation statements (ordered by relevance):
“…In the latter case, the principal component method reaches the same level of error as the conjugate gradient method only when m = 10 or more. These findings agree with the theoretical result of Proposition 2 and also with the conclusions of Delaigle and Hall (2012a) and Febrero-Bande et al. (2017), who point out that principal components need more degrees of freedom than partial least squares to reach good performance. In this regard, ridge regularization seems to lie between the two subspace methodologies.…”
Section: Behaviour of Regularized Classifiers on Complete Data (supporting, confidence: 92%)
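
The degrees-of-freedom point made in this excerpt can be illustrated numerically. The sketch below uses my own simulated design, not the cited authors' experiment: the response loads on a low-variance direction, so a PCA-based regression needs several components to reach it, while PLS, whose components are built to covary with the response, fits well almost immediately.

```python
# Illustrative only: compares how quickly PCA-based and PLS-based regressions
# improve as components are added (simulated data, not the cited experiment).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, p = 300, 50
# Column j has standard deviation 1/(j+1), so variance decays along coordinates.
X = rng.standard_normal((n, p)) @ np.diag(1.0 / np.arange(1, p + 1))
y = X[:, 5] + 0.1 * rng.standard_normal(n)   # signal sits in a low-variance direction

for m in (1, 2, 5, 10):
    pc_scores = PCA(n_components=m).fit_transform(X)
    r2_pcr = LinearRegression().fit(pc_scores, y).score(pc_scores, y)
    r2_pls = PLSRegression(n_components=m).fit(X, y).score(X, y)
    print(f"m={m:>2}  PCR R^2={r2_pcr:.3f}  PLS R^2={r2_pls:.3f}")
```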
“…We follow the standard theoretical framework in FDA, which assumes that functions are real-valued and belong to a Hilbert space of square-integrable functions over the observed range of wavelengths (Febrero-Bande et al., 2017; Reiss et al., 2017).…”
Section: Spectra as Functional Data (mentioning, confidence: 99%)
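
As a concrete reading of the square-integrability assumption, discretized spectra are typically handled through a quadrature approximation of the L2 inner product. A minimal sketch follows; the wavelength grid and the two spectra are invented for illustration.

```python
# Sketch of the L2 inner product <f, g> = integral of f(w)g(w) dw for two
# spectra observed on a common wavelength grid (all values illustrative).
import numpy as np

wavelengths = np.linspace(400.0, 700.0, 301)       # nm; hypothetical grid
dw = wavelengths[1] - wavelengths[0]               # grid spacing (1 nm)
f = np.exp(-((wavelengths - 550.0) / 40.0) ** 2)   # hypothetical spectrum 1
g = np.exp(-((wavelengths - 520.0) / 30.0) ** 2)   # hypothetical spectrum 2

inner_fg = float(f @ g) * dw         # Riemann-sum approximation of <f, g>
norm_f = (float(f @ f) * dw) ** 0.5  # induced L2 norm ||f||
print(inner_fg, norm_f)
```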
“…can only be defined on the Reproducing Kernel Hilbert Space (RKHS) R_0^{1/2}(H) of ε_n, n ∈ Z (see Bosq, 2000; Da Prato and Zabczyk, 2002, Chapter 1, pp. 12–16).…”
Section: Remark (mentioning, confidence: 99%)
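
For readers unfamiliar with the notation in this excerpt, the image space R_0^{1/2}(H) admits a standard spectral characterization (following Bosq, 2000). The display below is a sketch of that general fact, not the cited authors' exact formulation, with (λ_j, φ_j) denoting the eigenpairs of the covariance operator R_0.

```latex
% Sketch: standard characterization of R_0^{1/2}(H) via the eigenpairs
% (\lambda_j, \phi_j) of the covariance operator R_0 of \varepsilon_n.
\[
  R_0^{1/2}(H) \;=\;
  \Bigl\{\, f \in H \;:\;
    \sum_{j \ge 1} \frac{\langle f, \phi_j \rangle^2}{\lambda_j} < \infty
  \,\Bigr\},
  \qquad
  \| f \|_{R_0^{1/2}(H)}^2
  \;=\; \sum_{j \ge 1} \frac{\langle f, \phi_j \rangle^2}{\lambda_j}.
\]
```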