A principal component method for multivariate functional data is proposed. The data can be arranged in a matrix whose elements are functions, so that a vector of p functions is observed for each individual. This set of p curves is reduced to a small number of transformed functions, retaining as much information as possible. The criterion used to measure the information loss is the integrated variance. Under mild regularity conditions, it is proved that if the original functions are smooth, this property is inherited by the principal components. A numerical procedure to obtain the smooth principal components is proposed, and the goodness of the dimension reduction is assessed by two new measures of the proportion of explained variability. The method performs as expected on several controlled simulated data sets and yields interesting conclusions when applied to real data sets.
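The abstract does not spell out the computational steps, so the following is only a minimal sketch of how principal components of a vector of p curves, and the corresponding proportion of explained integrated variance, might be computed under a standard discretization approach. The function name multivariate_fpca, the trapezoidal quadrature weights, and the omission of any smoothing penalty are illustrative assumptions, not the authors' procedure.

```python
# Hedged sketch: multivariate functional PCA via discretization on a common grid.
# Each individual contributes p curves; they are stacked into one long vector,
# the L2 inner product is approximated with quadrature weights, and the
# eigenanalysis of the weighted covariance yields the principal components.
import numpy as np

def multivariate_fpca(X, grid, n_components=2):
    """X: array (n, p, m) -- n individuals, p curves, m common grid points.
    grid: array (m,) with the observation points."""
    n, p, m = X.shape
    # Trapezoidal quadrature weights approximating the integral over the grid.
    w = np.concatenate(([grid[1] - grid[0]],
                        grid[2:] - grid[:-2],
                        [grid[-1] - grid[-2]])) / 2.0
    W = np.tile(w, p)                         # weights for the stacked vector
    Xs = X.reshape(n, p * m)                  # stack the p curves per individual
    Xc = Xs - Xs.mean(axis=0)                 # center
    Y = Xc * np.sqrt(W)                       # symmetrize the weighted problem
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    eigvals = s ** 2 / (n - 1)                # eigenvalues of the covariance
    phi = (Vt[:n_components] / np.sqrt(W)).T  # discretized weight functions
    scores = Y @ Vt[:n_components].T          # approximate integral inner products
    explained = eigvals[:n_components] / eigvals.sum()
    return phi.reshape(p, m, n_components), scores, explained

# Small usage example with simulated smooth curves.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
X = np.array([[np.sin(2 * np.pi * grid) * rng.normal(1, 0.3)
               + np.cos(2 * np.pi * grid) * rng.normal(0, 0.3)
               for _ in range(3)] for _ in range(100)])
weights, scores, explained = multivariate_fpca(X, grid)
print("proportion of variability explained:", explained.round(3))
```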
In this paper we introduce two procedures for variable selection in cluster analysis and classification rules. One is mainly oriented toward detecting "noisy" non-informative variables, while the other also deals with multicollinearity. A forward-backward algorithm is also proposed to make these procedures feasible for large data sets. A small simulation study is performed and some real data examples are analyzed.
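The abstract does not state the selection criterion or the exact search rules, so the sketch below only illustrates the general shape of a forward-backward variable search for clustering. The criterion (mean silhouette of a k-means partition), the helper names score_subset and forward_backward_select, and the tolerance parameter are hypothetical choices, not the authors' method.

```python
# Hedged sketch of a greedy forward-backward search for informative clustering
# variables, scored by the mean silhouette of a k-means partition on the
# candidate subset (an assumed stand-in for the paper's actual criterion).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def score_subset(X, subset, k):
    # Cluster on the candidate variables and evaluate the partition quality.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[:, subset])
    return silhouette_score(X[:, subset], labels)

def forward_backward_select(X, k, tol=1e-3):
    remaining = list(range(X.shape[1]))
    selected, best = [], -np.inf
    improved = True
    while improved and remaining:
        improved = False
        # Forward step: add the variable that improves the criterion the most.
        gain, j = max((score_subset(X, selected + [j], k), j) for j in remaining)
        if gain > best + tol:
            selected.append(j); remaining.remove(j); best = gain; improved = True
        # Backward step: drop a previously selected variable if that helps.
        for j in list(selected):
            if len(selected) > 2:
                reduced = [v for v in selected if v != j]
                s = score_subset(X, reduced, k)
                if s > best + tol:
                    selected = reduced; remaining.append(j); best = s
    return selected, best

# Usage example: two informative variables plus five pure-noise variables.
rng = np.random.default_rng(1)
informative = rng.normal(size=(150, 2)) + np.repeat([[0, 0], [4, 0], [0, 4]], 50, axis=0)
X = np.hstack([informative, rng.normal(size=(150, 5))])
print(forward_backward_select(X, k=3))  # typically picks the two informative columns
```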