2017
DOI: 10.1080/00401706.2016.1153522
Maximum Likelihood Estimation for Stochastic Differential Equations Using Sequential Gaussian-Process-Based Optimization

Abstract: Stochastic Differential Equations (SDEs) are used as statistical models in many disciplines. However, intractable likelihood functions for SDEs make inference challenging, and we need to resort to simulation-based techniques to estimate and maximize the likelihood function. While sequential Monte Carlo methods have allowed for the accurate evaluation of likelihoods at fixed parameter values, there is still a question of how to find the maximum likelihood estimate. In this article we propose an efficient Gaussi…
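The sequential Gaussian-process-based optimization sketched in the abstract can be illustrated as follows. This is a minimal, hypothetical sketch, not the paper's actual method: the noisy log-likelihood stand-in, kernel length-scale, and upper-confidence-bound acquisition rule are all illustrative assumptions (the paper's acquisition strategy may differ).

```python
# Hedged sketch: GP-based search for a maximum-likelihood estimate when
# the log-likelihood can only be evaluated noisily (e.g. via SMC).
# All functions and constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def noisy_loglik(theta):
    # Stand-in for a simulation-based likelihood estimate;
    # the true optimum of this toy surface is at theta = 1.0.
    return -(theta - 1.0) ** 2 + 0.05 * rng.normal()

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# A few initial noisy evaluations of the log-likelihood.
X = np.array([-1.0, 0.0, 2.0])
y = np.array([noisy_loglik(t) for t in X])

grid = np.linspace(-2, 3, 201)
for _ in range(10):
    # GP posterior mean/variance on the grid (noise variance 0.05**2).
    K = rbf(X, X) + 0.05**2 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    sd = np.sqrt(np.maximum(var, 1e-12))
    # Upper-confidence-bound acquisition: evaluate where the GP surrogate
    # is either promising or uncertain.
    theta_next = grid[np.argmax(mu + 2.0 * sd)]
    X = np.append(X, theta_next)
    y = np.append(y, noisy_loglik(theta_next))

theta_hat = X[np.argmax(y)]
```

Each iteration refits the GP surrogate to all evaluations so far and spends the next (expensive) simulation where the surrogate suggests the likelihood may be highest.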

Cited by 8 publications (8 citation statements). References 52 publications.
“…When the robot is in motion, the coordinates of the tag are always changing and only one pair of coordinates can be measured at a time. Maximum likelihood estimation (MLE), a regression algorithm, can be used to predict the positioning information of the dynamic operation due to its consistency, validity, and invariance [23, 24]. However, since the MLE algorithm can predict only one value at a time and the horizontal and vertical coordinates of the measured data change simultaneously [25], the coordinates need to be represented by only one parameter, i.e.…”
Section: Discussion
confidence: 99%
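The excerpt's idea of collapsing a coordinate pair into a single parameter can be sketched as follows. This is a hedged illustration, not code from the cited papers: the straight-line path, its direction vector, and the isotropic-Gaussian noise assumption are all hypothetical.

```python
# Hedged illustration: represent a 2-D coordinate pair by one path
# parameter s, so Gaussian MLE reduces to projecting each noisy
# measurement onto the path. The path below is a hypothetical example.
import numpy as np

def path(s):
    # Hypothetical straight-line robot path: (x, y) = (s, 0.5 * s).
    return np.array([s, 0.5 * s])

def mle_s(measurement):
    # With isotropic Gaussian measurement noise, maximizing the
    # likelihood over s minimizes ||z - path(s)||^2; for this linear
    # path the minimizer is the orthogonal projection onto the path.
    d = np.array([1.0, 0.5])          # path direction vector
    return float(measurement @ d / (d @ d))

z = np.array([2.1, 0.9])              # one noisy (x, y) measurement
s_hat = mle_s(z)                      # single-parameter MLE
x_hat, y_hat = path(s_hat)            # implied coordinate estimate
```

Because both coordinates are functions of the one parameter s, a single MLE step recovers the full position estimate, which is the workaround the excerpt describes.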
“…In the stochastic example, the loss function is set to the negative log-likelihood function (4.3) together with the Euler–Maruyama integrator [32]. The same loss function is also used in [22,33], which is defined by
\[
\mathcal{L}(\theta) = -\frac{1}{N_{\mathrm{traj}}} \sum_{k=1}^{N_{\mathrm{traj}}} \frac{1}{T} \sum_{j=1}^{T} \log p\!\left(z_j^{(k)} \mid z_{j-1}^{(k)}, \theta\right),
\]
where $p(z_j^{(k)} \mid z_{j-1}^{(k)}, \theta)$ is the probability density function of a multivariate normal distribution with mean $\Delta t_j\, \mu_{\mathrm{NN}}(z_{j-1}^{(k)})$ and covariance matrix $2 k_B \Delta t_j\, M_{\mathrm{NN}}(z_{j-1}^{(k)})$ evaluated a…”
Section: Numerical Examples
confidence: 99%
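The Euler–Maruyama negative log-likelihood quoted above can be sketched numerically. This is a hedged sketch under simplifying assumptions: the drift is a plain Ornstein–Uhlenbeck term rather than the neural-network drift of the cited work, the diffusion is a scalar constant, and the time step is uniform.

```python
# Hedged sketch of an Euler-Maruyama negative log-likelihood of the
# quoted form: average of -log p(z_j | z_{j-1}, theta) over all
# trajectories k and transitions j. Drift/diffusion here are a simple
# 1-D Ornstein-Uhlenbeck model, an illustrative assumption.
import numpy as np

def em_nll(trajs, dt, theta, sigma):
    # trajs: array (N_traj, T+1). Under Euler-Maruyama,
    # z_j | z_{j-1} ~ N(z_{j-1} - theta * z_{j-1} * dt, sigma**2 * dt).
    z_prev, z_next = trajs[:, :-1], trajs[:, 1:]
    mean = z_prev - theta * z_prev * dt
    var = sigma**2 * dt
    logp = -0.5 * (np.log(2 * np.pi * var) + (z_next - mean) ** 2 / var)
    return -logp.mean()

# Simulate OU trajectories with true theta = 2.0 and check the NLL is
# smaller at the truth than at a badly misspecified value.
rng = np.random.default_rng(1)
dt, sigma, theta_true = 0.01, 0.5, 2.0
z = np.zeros((50, 501))
z[:, 0] = rng.normal(size=50)
for j in range(500):
    z[:, j + 1] = z[:, j] - theta_true * z[:, j] * dt \
                  + sigma * np.sqrt(dt) * rng.normal(size=50)
```

Minimizing `em_nll` over `theta` (e.g. by grid search or gradient descent) is exactly the discretized maximum-likelihood procedure the excerpt describes.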
“…The original surface (A,E), Gaussian process (B,F), LapRLS (C,G), and L-DPGP (D,H). The proposed framework retains the Gaussian-process expected values across the domain while also incorporating Laplacian regularization to preserve the spatial, response, and gradient geometry of the covariance matrix in (13); estimating the gradients and responses at all locations with an initial GP provides a more useful feature extraction that can be employed by the proposed model. As a result, the L-DPGP framework provides a more accurate representation of the thinly elevated profile of the true surface shown in Figure 2E.…”
Section: Framework Formulation
confidence: 99%
“…Various aspects of the GP, including model selection and adaptation of hyperparameters, applications in regression and classification problems, and its relationship with other estimation models, are extensively discussed in the literature. 2,[12][13][14][15] In many situations involving the estimation of noisy black-box functions, in addition to the (small) initial set of measured settings (experiments), there is a large set of unmeasured settings that may be used to improve the estimation of the underlying function. 16 Semi-supervised learning is a technique that integrates the information of measured and unmeasured data points to develop better hypotheses about the underlying function and improve the accuracy of predictive modeling.…”
Section: Introduction
confidence: 99%