Iterated filtering algorithms are stochastic optimization procedures for latent variable models that recursively combine parameter perturbations with latent variable reconstruction. Previously, theoretical support for these algorithms has been based on the use of conditional moments of perturbed parameters to approximate derivatives of the log likelihood function. Here, a theoretical approach is introduced based on the convergence of an iterated Bayes map. An algorithm supported by this theory displays substantial numerical improvement on the computational challenge of inferring parameters of a partially observed Markov process.

sequential Monte Carlo | particle filter | maximum likelihood | Markov process

An iterated filtering algorithm was originally proposed for maximum likelihood inference on partially observed Markov process (POMP) models by Ionides et al. (1). Variations on the original algorithm have been proposed to extend it to general latent variable models (2) and to improve numerical performance (3, 4). In this paper, we study an iterated filtering algorithm that generalizes the data cloning method (5, 6) and is therefore also related to other Monte Carlo methods for likelihood-based inference (7–9). Data cloning methodology is based on the observation that iterating a Bayes map converges to a point mass at the maximum likelihood estimate. Combining such iterations with perturbations of model parameters improves the numerical stability of data cloning and provides a foundation for stable algorithms in which the Bayes map is numerically approximated by sequential Monte Carlo computations.

We investigate convergence of a sequential Monte Carlo implementation of an iterated filtering algorithm that combines data cloning, in the sense of Lele et al. (5), with the stochastic parameter perturbations used by the iterated filtering algorithm of (1). Lindström et al. (4) proposed a similar algorithm, termed fast iterated filtering, but the theoretical support for that algorithm involved unproved conjectures. We present convergence results for our algorithm, which we call IF2. Empirically, it can dramatically outperform the previous iterated filtering algorithm of ref. 1, which we refer to as IF1. Although IF1 and IF2 both involve recursively filtering through the data, the theoretical justification and practical implementations of these algorithms are fundamentally different. IF1 approximates the Fisher score function, whereas IF2 implements an iterated Bayes map. IF1 has been used in applications for which no other computationally feasible algorithm for statistically efficient, likelihood-based inference was known (10–15). The extra capabilities offered by IF2 open up further possibilities for drawing inferences about nonlinear partially observed stochastic dynamic models from time series data.

Iterated filtering algorithms implemented using basic sequential Monte Carlo techniques have the property that they do not need to evaluate the transition density of the latent Markov process. Algorithms with this property...
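To make the iterated Bayes map concrete, here is a minimal Python sketch of the IF2 recursion on a toy one-parameter POMP: each pass runs a particle filter in which parameters are perturbed at every time step and (parameter, state) pairs are resampled together, with the perturbation scale cooled across passes so the parameter swarm concentrates near the maximum likelihood estimate. The toy model, cooling schedule, and tuning constants are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy POMP (assumed for illustration): X_t = X_{t-1} + theta + N(0, 0.5^2),  Y_t | X_t ~ N(X_t, 1)
def rprocess(x, theta):
    """One-step state simulator, vectorized over particles."""
    return x + theta + rng.normal(0.0, 0.5, size=x.shape)

def dmeasure(y, x):
    """Measurement density N(y; x, 1), evaluated at each particle."""
    return np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2.0 * np.pi)

def simulate(theta_true=0.3, T=100):
    x, ys = 0.0, []
    for _ in range(T):
        x = x + theta_true + rng.normal(0.0, 0.5)
        ys.append(x + rng.normal())
    return np.array(ys)

def if2(ys, J=500, M=30, sigma0=0.2, cooling=0.9):
    """M iterations of a perturbed-parameter particle filter (the iterated Bayes map)."""
    thetas = rng.normal(0.0, 1.0, size=J)               # initial parameter swarm
    for m in range(M):
        sigma = sigma0 * cooling ** m                    # perturbation sd shrinks across iterations
        xs = np.zeros(J)                                 # latent states re-initialized each pass
        for y in ys:
            thetas = thetas + rng.normal(0.0, sigma, size=J)   # perturb parameters
            xs = rprocess(xs, thetas)                          # propagate states
            w = dmeasure(y, xs)
            idx = rng.choice(J, size=J, p=w / w.sum())         # resample (theta, x) pairs together
            thetas, xs = thetas[idx], xs[idx]
    return thetas                                        # swarm concentrates near the MLE

ys = simulate()
print("IF2 swarm mean:", if2(ys).mean())                 # should be close to theta_true = 0.3
```

Only the process simulator and measurement density are needed, which reflects the plug-and-play property discussed in the text: the latent transition density is never evaluated.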
We construct extremal stochastic integrals ∫^e_E f(u) M_α(du) of a deterministic function f(u) ≥ 0 with respect to a random α-Fréchet (α > 0) sup-measure. The measure M_α is sup-additive rather than additive and is defined over a general measure space (E, 𝓔, μ), where μ is a deterministic control measure. The extremal integral is constructed in a way similar to the usual α-stable integral, but with the maxima replacing the operation of summation. It is well defined for arbitrary f(u) ≥ 0 with ∫_E f(u)^α μ(du) < ∞, and the metric ρ(f, g) := ∫_E |f(u)^α − g(u)^α| μ(du) metrizes the convergence in probability of the resulting integrals.

This approach complements de Haan's well-known spectral representation of max-stable processes with α-Fréchet marginals. De Haan's representation can be viewed as the max-stable analog of the LePage series representation of α-stable processes, whereas the extremal integrals correspond to the usual α-stable stochastic integrals. We prove that essentially any strictly α-stable process belongs to the domain of max-stable attraction of an α-Fréchet, max-stable process. Moreover, we express the corresponding α-Fréchet processes in terms of extremal stochastic integrals, involving the kernel function of the α-stable process. The close correspondence between the max-stable and α-stable frameworks yields new examples of max-stable processes with non-trivial dependence structures.
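As a point of reference (a standard construction consistent with the abstract above, not quoted from the article), the extremal integral of a simple function f = ⋁_i a_i 1_{A_i} with disjoint sets A_i reduces to a finite maximum of independent α-Fréchet terms, and the resulting random variable is again α-Fréchet:

```latex
% Sketch: extremal integral of a simple function and its alpha-Frechet law.
\[
  \int_E^{e} f(u)\, M_\alpha(du) \;=\; \bigvee_{i} a_i\, M_\alpha(A_i),
  \qquad
  \mathbb{P}\Bigl\{ \int_E^{e} f(u)\, M_\alpha(du) \le x \Bigr\}
  \;=\; \exp\Bigl\{ -x^{-\alpha} \int_E f(u)^{\alpha}\, \mu(du) \Bigr\},
  \quad x > 0.
\]
```

This mirrors the way α-stable integrals are built up from simple functions, with maxima in place of sums.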
We present efficient methods for simulation, using the Fast Fourier Transform (FFT) algorithm, of two classes of processes with symmetric α-stable (SαS) distributions. Namely, (i) the linear fractional stable motion (LFSM) process and (ii) the fractional autoregressive moving average (FARIMA) time series with SαS innovations. These two types of heavy-tailed processes have infinite variances and long-range dependence and they can be used in modeling the traffic of modern computer telecommunication networks.

We generate paths of the LFSM process by using Riemann-sum approximations of its SαS stochastic integral representation and paths of the FARIMA time series by truncating their moving average representation. In both the LFSM and FARIMA cases, we compute the involved sums efficiently by using the Fast Fourier Transform algorithm and provide bounds and/or estimates of the approximation error.

We discuss different choices of the discretization and truncation parameters involved in our algorithms and illustrate our method. We include MATLAB implementations of these simulation algorithms and indicate how the practitioner can use them.
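As a hedged illustration of the FARIMA part of this recipe, the Python sketch below truncates the moving-average representation of the fractional filter and applies it to symmetric α-stable innovations with a single FFT-based convolution; the parameter values and truncation length are arbitrary choices, and this is not the authors' MATLAB implementation.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import levy_stable

def farima_sas(n, d=0.2, alpha=1.5, K=4096, seed=0):
    """Approximate FARIMA(0, d, 0) path with SaS(alpha) innovations, MA filter truncated at K lags."""
    # MA coefficients of (1 - B)^(-d): psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
    k = np.arange(1, K)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    # symmetric alpha-stable innovations (beta = 0); K extra values warm up the filter
    z = levy_stable.rvs(alpha, 0.0, size=n + K, random_state=seed)
    return fftconvolve(z, psi)[K:K + n]      # FFT-based moving-average filtering

path = farima_sas(10_000)
print(path[:5])
```

The FFT convolution replaces an O(nK) direct summation with an O((n + K) log(n + K)) computation, which is the efficiency gain the abstract refers to.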
Max-stable processes arise in the limit of component-wise maxima of independent processes, under appropriate centering and normalization. In this paper, we establish necessary and sufficient conditions for ergodicity and mixing of stationary max-stable processes. We do so in terms of their spectral representations by using extremal integrals. The large classes of moving maxima and mixed moving maxima processes are shown to be mixing. Other examples of ergodic doubly stochastic processes and non-ergodic processes are also given. The ergodicity conditions involve a certain measure of dependence. We relate this measure of dependence to the one of Weintraub (1991) and show that Weintraub's notion of '0-mixing' is equivalent to mixing. Consistent estimators for the dependence function of an ergodic max-stable process are introduced and illustrated over simulated data.
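As one illustration of how ergodicity enables estimation from a single path, the sketch below computes a generic plug-in estimate of a pairwise extremal coefficient by time-averaging a joint non-exceedance event; this is a standard construction used here for illustration, not necessarily the dependence-function estimator introduced in the article.

```python
import numpy as np

def pairwise_extremal_coefficient(x, h, level):
    """Plug-in estimate of theta(h) in [1, 2] for a path with unit-Frechet margins."""
    joint = np.mean((x[:-h] <= level) & (x[h:] <= level))   # ergodic average of the joint event
    return -level * np.log(joint)                            # invert P = exp(-theta(h)/level)

# Toy data: moving-maxima process X_t = max_k a_k Z_{t+k} with unit-Frechet Z_t,
# rescaled so the margins are again unit Frechet.
rng = np.random.default_rng(0)
n, a = 200_000, np.array([1.0, 0.5, 0.25])
z = 1.0 / rng.exponential(size=n + len(a))                   # unit-Frechet variables
x = np.max([a[k] * z[k:k + n] for k in range(len(a))], axis=0) / a.sum()
print(pairwise_extremal_coefficient(x, h=1, level=np.quantile(x, 0.95)))  # theory gives 11/7 ~ 1.57
```

The moving-maxima example is one of the classes shown in the abstract to be mixing, so the time average above converges to the corresponding ensemble probability.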
We develop classification results for max-stable processes, based on their spectral representations. Max-linear isometries and minimal spectral representations play important roles. We propose a general classification strategy for measurable max-stable processes based on the notion of co-spectral functions. In particular, we discuss the spectrally continuous-discrete, the conservative-dissipative, and the positive-null decompositions. For stationary max-stable processes, the latter two decompositions arise from connections to nonsingular flows and are closely related to the classification of stationary sum-stable processes. The interplay between the introduced decompositions of max-stable processes is further explored. As an example, the Brown-Resnick stationary processes, driven by fractional Brownian motions, are shown to be dissipative.
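For orientation, the following LaTeX sketch records standard forms assumed here rather than quoted from the article: the extremal-integral (de Haan-type) spectral representation on which such classifications operate, and the Brown-Resnick stationary process written with 1-Fréchet margins via log-Gaussian spectral functions.

```latex
% Assumed standard forms: spectral representation of an alpha-Frechet max-stable
% process over spectral functions f_t, and the Brown-Resnick stationary process,
% where {U_i} are the points of a Poisson process on (0, infinity) with intensity
% u^{-2} du and the W_i are independent copies of a fractional Brownian motion
% with variance function sigma^2(t).
\[
  X(t) \;\overset{d}{=}\; \int_E^{e} f_t(u)\, M_\alpha(du), \qquad t \in T,
\]
\[
  X_{\mathrm{BR}}(t) \;\overset{d}{=}\; \bigvee_{i \ge 1} U_i
  \exp\!\Bigl( W_i(t) - \tfrac{1}{2}\,\sigma^2(t) \Bigr), \qquad t \in \mathbb{R}.
\]
```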