Spectral models for covariance matrices (2002)
DOI: 10.1093/biomet/89.1.159

Cited by 70 publications (87 citation statements)
References 52 publications
“…Prominent examples of this pooling phenomenon include model-based principal component analysis [19,21], model-based cluster analysis and discriminant analysis [31,2], longitudinal data analysis [14], and multivariate volatility in finance [7,17], where the number of covariances to be estimated could be as large as the number of observations. Some of the most commonly used methods for handling several covariance matrices in the literature of multivariate statistics, the biomedical sciences, and financial econometrics are based on the spectral decomposition [19,21,4,23], the variance-correlation decomposition [30,3], and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models [6,17]. It is conceivable that a framework like Nelder and Wedderburn's [32] generalized linear models (GLM) could be used to compare, unify and possibly generalize the above approaches to covariance modelling.…”
Section: Introduction (mentioning)
confidence: 99%
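
For readers who want the two decompositions named in this excerpt made concrete, the following short numpy sketch factors an arbitrary example covariance matrix both ways, spectrally and as variance times correlation; the matrix and variable names are illustrative choices of mine, not anything taken from the cited papers.

import numpy as np

# Arbitrary example covariance matrix (illustrative only).
S = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])

# Spectral decomposition: S = P diag(lam) P', with P orthogonal.
lam, P = np.linalg.eigh(S)
assert np.allclose(P @ np.diag(lam) @ P.T, S)

# Variance-correlation decomposition: S = D R D, with D the diagonal
# matrix of standard deviations and R the correlation matrix.
d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)
assert np.allclose(np.diag(d) @ R @ np.diag(d), S)
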
“…While the entries of the correlation and orthogonal matrices appearing in the variance-correlation and spectral decompositions are always constrained, those appearing in the unit lower triangular matrix of the Cholesky decomposition, referred to as the generalized autoregressive parameters (GARP), are always unconstrained [33,34]. Consequently, computing the maximum likelihood estimates (MLE) of the Cholesky decomposition involves unconstrained optimization, unlike the algorithms needed for estimation with the other two decompositions; see [22,3,4].…”
Section: Introduction (mentioning)
confidence: 99%
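
To illustrate why the Cholesky-based parametrization mentioned here lends itself to unconstrained optimization, the sketch below rescales the ordinary Cholesky factor of an arbitrary matrix into the unit lower triangular form S = T D T'; this is only a minimal illustration of the idea behind the generalized autoregressive parameters, under my own choice of example matrix, and it does not reproduce the estimation algorithms of the cited works.

import numpy as np

# Arbitrary example covariance matrix (illustrative only).
S = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])

# Ordinary Cholesky factor: S = L L', with L lower triangular.
L = np.linalg.cholesky(S)

# Rescale each column by its diagonal entry to get S = T D T',
# where T is unit lower triangular and D is diagonal and positive.
d = np.diag(L)
T = L / d
D = np.diag(d ** 2)
assert np.allclose(T @ D @ T.T, S)

# The below-diagonal entries of T are unconstrained real numbers, and
# D only needs positive diagonal entries (handled, e.g., by a log
# transform), so likelihood maximization can run unconstrained.
garp = T[np.tril_indices_from(T, k=-1)]
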
“…Flury considers a spectral decomposition [14-16] and allows 'commonality' of the eigenvectors (and variations) across groups, while the eigenvalues are allowed to differ. Boik [4] generalized some of this work, allowing finer models for the eigenvectors and structured models for the eigenvalues. Manly and Rayner [21], using a variance/correlation decomposition of the covariance matrix, develop a hierarchy of models for covariance matrices across groups, including proportional covariance matrices and a common correlation matrix across the groups.…”
Section: Introduction (mentioning)
confidence: 99%
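
To make the "common eigenvectors, group-specific eigenvalues" structure in this excerpt concrete, the sketch below builds two covariance matrices that share one orthogonal matrix B, so that Sigma_i = B Lambda_i B'; the matrix B and the eigenvalue vectors are arbitrary illustrative choices of mine, not Flury's or Boik's fitted models.

import numpy as np

# A common orthogonal matrix B shared by both groups (obtained here
# from the QR decomposition of a random matrix; illustrative only).
rng = np.random.default_rng(0)
B, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Group-specific eigenvalues (the diagonals of Lambda_1 and Lambda_2).
lam1 = np.array([5.0, 2.0, 1.0])
lam2 = np.array([3.0, 3.0, 0.5])

# Common principal components structure: Sigma_i = B Lambda_i B'.
Sigma1 = B @ np.diag(lam1) @ B.T
Sigma2 = B @ np.diag(lam2) @ B.T

# Both matrices are diagonalized by the same B.
assert np.allclose(B.T @ Sigma1 @ B, np.diag(lam1))
assert np.allclose(B.T @ Sigma2 @ B, np.diag(lam2))
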
“…More generally, the population covariance or correlation matrix is parametrized as a function of a lower-dimensional vector, θ, and θ is estimated by minimizing a discrepancy function (Browne, 1984; Shapiro, 2007). This is the conventional approach to estimation in factor analysis models and it also is applicable to principal component models (Boik, 2002, 2003, 2005).…”
Section: Asymptotic Distributions (mentioning)
confidence: 99%
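
The general recipe described in this excerpt, parametrizing the covariance matrix as Sigma(theta) and minimizing a discrepancy with the sample covariance S, can be sketched in a few lines of scipy; the unweighted least-squares discrepancy and the one-factor parametrization below are simple illustrative choices of mine, not Browne's discrepancy functions or Boik's principal component models.

import numpy as np
from scipy.optimize import minimize

# Arbitrary sample covariance matrix (illustrative only).
S = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
p = S.shape[0]

def sigma(theta):
    # One-factor parametrization: Sigma(theta) = l l' + diag(psi),
    # with loadings l and log-variances in theta (so psi > 0).
    load = theta[:p]
    psi = np.exp(theta[p:])
    return np.outer(load, load) + np.diag(psi)

def discrepancy(theta):
    # Unweighted least-squares discrepancy between S and Sigma(theta).
    diff = S - sigma(theta)
    return np.sum(diff ** 2)

theta0 = np.concatenate([np.ones(p), np.zeros(p)])
fit = minimize(discrepancy, theta0, method="BFGS")
print(fit.fun, sigma(fit.x))
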