1983
DOI: 10.1016/0304-4076(83)90066-0

Alternative algorithms for the estimation of dynamic factor, MIMIC and varying coefficient regression models

Abstract: This paper provides a general approach to the formulation and estimation of dynamic unobserved component models. After introducing the general model, two methods for estimating the unknown parameters are presented. Both are algorithms for maximizing the likelihood function. The first is based on the method of scoring. The second is the EM algorithm, a derivative-free method. Each iteration of EM requires a Kalman filter and smoother followed by straightforward regression calculations. The paper suggests using …
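The EM recipe in the abstract (a Kalman filter and smoother pass, then regression calculations) can be made concrete for a linear Gaussian state space model x_t = A x_{t-1} + w_t, y_t = C x_t + v_t. Below is a minimal NumPy sketch of the filter/smoother pass that forms the E-step; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Forward pass: one-step-ahead predictions and filtered estimates."""
    T = y.shape[0]
    n = A.shape[0]
    xp = np.zeros((T, n)); Pp = np.zeros((T, n, n))  # predicted moments
    xf = np.zeros((T, n)); Pf = np.zeros((T, n, n))  # filtered moments
    x, P = x0, P0
    for t in range(T):
        x = A @ x                        # predict state
        P = A @ P @ A.T + Q
        xp[t], Pp[t] = x, P
        F = C @ P @ C.T + R              # innovation variance
        K = P @ C.T @ np.linalg.inv(F)   # Kalman gain
        x = x + K @ (y[t] - C @ x)       # measurement update
        P = P - K @ C @ P
        xf[t], Pf[t] = x, P
    return xp, Pp, xf, Pf

def rts_smoother(A, xp, Pp, xf, Pf):
    """Backward (Rauch-Tung-Striebel) pass: smoothed moments for the E-step."""
    T, n = xf.shape
    xs = xf.copy(); Ps = Pf.copy()
    Pcs = np.zeros((T - 1, n, n))        # Cov(x_{t+1}, x_t | y_1..T)
    for t in range(T - 2, -1, -1):
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])  # smoother gain
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
        Pcs[t] = Ps[t + 1] @ J.T
    return xs, Ps, Pcs
```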


Cited by 408 publications (276 citation statements)
References 11 publications
“…In the case of the DFM, the algorithm alternates between the use of the Kalman smoother to estimate the common factors given a set of parameters (E-step), and multivariate regressions (corrected for the uncertainty in the estimation of the common factors) to estimate the parameters given the factors (M-step), see e.g. Watson and Engle (1983) or Shumway and Stoffer (1982). …”
Section: Dynamic Factor Models
confidence: 99%
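The M-step regressions that this passage pairs with the Kalman smoother have closed forms in the style of Shumway and Stoffer (1982). A hedged sketch, standalone given smoothed output (xs, Ps and lag-one covariances Pcs, e.g. from a smoother like the one sketched under the abstract); the names are illustrative:

```python
import numpy as np

def m_step(y, xs, Ps, Pcs):
    """M-step given smoothed means xs[t] = E[x_t | y_1..T], covariances Ps[t],
    and lag-one covariances Pcs[t] = Cov(x_{t+1}, x_t | y_1..T)."""
    T = y.shape[0]
    Ex = lambda t: Ps[t] + np.outer(xs[t], xs[t])   # E[x_t x_t' | y_1..T]
    S00 = sum(Ex(t) for t in range(T - 1))
    S11 = sum(Ex(t) for t in range(1, T))
    S10 = sum(Pcs[t] + np.outer(xs[t + 1], xs[t]) for t in range(T - 1))
    # Transition "regression" of x_{t+1} on x_t, corrected for factor uncertainty
    A = S10 @ np.linalg.inv(S00)
    Q = (S11 - A @ S10.T) / (T - 1)
    # Measurement regression of y_t on the smoothed factor
    Sxx = sum(Ex(t) for t in range(T))
    Syx = sum(np.outer(y[t], xs[t]) for t in range(T))
    C = Syx @ np.linalg.inv(Sxx)
    resid = y - xs @ C.T
    R = (resid.T @ resid + sum(C @ Ps[t] @ C.T for t in range(T))) / T
    return A, C, Q, R
```

The second-moment matrices, rather than the smoothed means alone, are what implement the "corrected for the uncertainty in the estimation of the common factors" part of the passage.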
“…The parameters are usually estimated by Maximum Likelihood (ML) maximizing the one-step-ahead decomposition of the log-Gaussian likelihood; see Engle and Watson (1981) and Watson and Engle (1983). The maximization of the log-likelihood entails nonlinear optimization which restricts the number of parameters that can be estimated and, consequently, the number of series that can be handled when estimating the underlying factors.…”
Section: Kalman Filter and Smoothing
confidence: 99%
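The one-step-ahead (prediction error) decomposition that this passage mentions evaluates the Gaussian log-likelihood from the Kalman filter innovations v_t = y_t - C x_{t|t-1} and their variances F_t. A minimal sketch under the same illustrative state space notation as above:

```python
import numpy as np

def gaussian_loglik(y, A, C, Q, R, x0, P0):
    """Log-likelihood via the prediction error decomposition:
    sum_t log N(y_t; C x_{t|t-1}, F_t), accumulated inside the Kalman filter."""
    T, m = y.shape
    x, P = x0, P0
    ll = 0.0
    for t in range(T):
        x = A @ x                        # one-step-ahead state prediction
        P = A @ P @ A.T + Q
        v = y[t] - C @ x                 # innovation
        F = C @ P @ C.T + R              # innovation variance
        ll += -0.5 * (m * np.log(2 * np.pi)
                      + np.linalg.slogdet(F)[1]
                      + v @ np.linalg.solve(F, v))
        K = P @ C.T @ np.linalg.inv(F)   # measurement update
        x = x + K @ v
        P = P - K @ C @ P
    return ll
```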
“…To generate the results in this paper we used the BFGS algorithm to perform the optimization; see, for example, Nocedal and Wright (1999). An alternative approach would be to use the EM algorithm as developed for state space models by Watson and Engle (1983).…”
Section: Parameter Estimation and Signal Extraction
confidence: 99%
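For the BFGS route this passage describes, a common pattern is to minimize the negative log-likelihood over a vector of free parameters with scipy.optimize. A sketch assuming a log-likelihood function like gaussian_loglik above and a hypothetical unpack() that maps the parameter vector into (A, C, Q, R); this is not the cited authors' code:

```python
from scipy.optimize import minimize

def neg_loglik(theta, y, unpack, x0, P0):
    """Negative log-likelihood as a function of the free parameter vector."""
    A, C, Q, R = unpack(theta)           # hypothetical parameter mapping
    return -gaussian_loglik(y, A, C, Q, R, x0, P0)

# Usage sketch: theta_init, y, unpack, x0, P0 supplied by the modeller.
# res = minimize(neg_loglik, theta_init, args=(y, unpack, x0, P0),
#                method="BFGS")          # quasi-Newton; gradient by finite differences
# theta_hat = res.x
```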
“…Early contributions to this literature can be found in Sargent and Sims (1977), Geweke (1977), Geweke and Singleton (1981), Engle and Watson (1981), Watson and Engle (1983), Connor and Korajczyk (1993) and Gregory, Head, and Raynauld (1997). Most of these papers consider time series panels with limited panel dimensions.…”
Section: Introduction
confidence: 99%