2019
DOI: 10.1214/19-ejs1587
Maximum likelihood estimation for Gaussian processes under inequality constraints

Abstract: We consider covariance parameter estimation for a Gaussian process under inequality constraints (boundedness, monotonicity or convexity) in fixed-domain asymptotics. We address the estimation of the variance parameter and the estimation of the microergodic parameter of the Matérn and Wendland covariance functions. First, we show that the (unconstrained) maximum likelihood estimator has the same asymptotic distribution, unconditionally and conditionally to the fact that the Gaussian process satisfies the inequality constraints.
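To make the abstract's setting concrete, here is a minimal numerical sketch (an illustration under assumed parameter values, not the paper's code): when the correlation matrix R is fixed, the unconstrained maximum likelihood estimator of the variance parameter has the closed form sigma2_hat = y' R^{-1} y / n, evaluated here for a Matérn-3/2 correlation on a fixed-domain design in [0, 1].

# Minimal sketch (assumed values, not the paper's code): unconstrained ML
# estimation of the variance parameter of a Gaussian process with a
# Matern-3/2 correlation, observed at n points of the fixed domain [0, 1].
import numpy as np

rng = np.random.default_rng(0)

def matern32_corr(x, rho):
    # Matern-3/2 correlation matrix for 1-d inputs x and range parameter rho
    h = np.abs(x[:, None] - x[None, :]) / rho
    return (1.0 + np.sqrt(3.0) * h) * np.exp(-np.sqrt(3.0) * h)

n, sigma2_true, rho = 200, 2.0, 0.3
x = np.sort(rng.uniform(0.0, 1.0, n))          # fixed-domain design
R = matern32_corr(x, rho)
L = np.linalg.cholesky(R + 1e-10 * np.eye(n))  # small jitter for stability
y = np.sqrt(sigma2_true) * (L @ rng.standard_normal(n))

# With R fixed, the ML estimator of sigma^2 is closed-form: y' R^{-1} y / n
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
sigma2_hat = y @ alpha / n
print(f"sigma2_hat = {sigma2_hat:.3f} (true value {sigma2_true})")

Under fixed-domain asymptotics the variance and range of a Matérn model with smoothness ν cannot both be estimated consistently; only the microergodic combination σ²/ρ^{2ν} can, which is why the abstract targets that quantity rather than σ² and ρ separately.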

Cited by 21 publications (19 citation statements); references 45 publications.
“…For other work related to GP misspecification and kernel parameter estimation in a variety of settings, see Stein (1993), Bachoc (2013), Bachoc, Lagnoux, and Nguyen (2017), Bachoc (2017), López-Lopera (2019), and Teckentrup (2019).…”
Section: Scale Parameter Estimation
confidence: 99%
“…This approach has the advantages of avoiding the burden of a large training set that comes with a neural network model, and the inexact satisfaction of constraints that comes with penalizing constraints in the loss function. There has recently been significant interest in the incorporation of constraints into Gaussian process regression (GPR) models (Bachoc et al., 2019; Da Veiga and Marrel, 2012; Jensen et al., 2013; López-Lopera et al., 2018; Raissi et al., 2017; Riihimäki and Vehtari, 2010; Solak et al., 2003; Yang et al., 2018). Many of these approaches leverage the analytic formulation of the GP to incorporate constraints through the likelihood function or…”
Section: Introduction
confidence: 99%
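As a rough illustration of the idea in this excerpt (a hedged sketch with invented data, not the method of any specific paper cited above), a boundedness constraint such as f >= 0 can be imposed approximately by drawing from the usual GP posterior at a finite grid and keeping only the draws that satisfy the constraint there:

# Hedged sketch (invented data): enforcing a boundedness constraint
# f >= 0 on GP posterior samples by rejection at a finite grid -- a
# crude stand-in for the exact constrained-GP constructions cited above.
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel between two sets of 1-d inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x_obs = np.array([0.1, 0.4, 0.6, 0.9])   # noisy observations of a
y_obs = np.array([0.5, 1.2, 1.0, 0.3])   # positive function
x_grid = np.linspace(0.0, 1.0, 25)

# Standard (unconstrained) GP posterior at the grid
K = rbf(x_obs, x_obs) + 1e-4 * np.eye(4)
Ks = rbf(x_grid, x_obs)
mean = Ks @ np.linalg.solve(K, y_obs)
cov = rbf(x_grid, x_grid) - Ks @ np.linalg.solve(K, Ks.T)

# Keep only the posterior draws that satisfy the constraint on the grid
draws = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(25), size=2000)
feasible = draws[(draws >= 0.0).all(axis=1)]
print(f"{len(feasible)} of 2000 draws satisfy f >= 0 on the grid")

Exact constrained formulations, such as the finite-dimensional approximation of López-Lopera et al. (2018), replace this rejection step with direct sampling from a truncated multivariate normal.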
“…We can then provide the best LHD obtained by EI with the Spearman distance. This LHD is given by the permutations σ2 = (5, 2, 1, 7, 6, 3, 4, 8, 11, 13, 12, 9, 10, 14, 15) and σ3 = (3, 6, 1, 8, 4, 9, 15, 7, 12, 5, 13, 10, 2, 11, 14). To conclude, the kernels on permutations provided in Section 2 enable us to use EI, which gives much better results than simulated annealing or random sampling for finding the best LHD.…”
Section: Application To the Optimization Of Latin Hypercube Designs
confidence: 99%
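A small sketch of the ingredients named in this excerpt (the exact kernel parameterisation is an assumption): the Spearman distance between two permutations is the squared Euclidean distance between their rank vectors, and exponentiating its negative gives a positive-definite Gaussian-type kernel on permutations.

# Illustrative sketch (assumed parameterisation, not the cited paper's
# code): Spearman distance between permutations and an exponential
# kernel built from it, as used when optimizing Latin hypercube designs.
import numpy as np

def spearman_distance(p, q):
    # Spearman distance: squared Euclidean distance between rank vectors
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sum((p - q) ** 2))

def perm_kernel(p, q, theta=0.01):
    # Gaussian-type kernel on permutations (theta is a hypothetical value)
    return np.exp(-theta * spearman_distance(p, q))

sigma2 = (5, 2, 1, 7, 6, 3, 4, 8, 11, 13, 12, 9, 10, 14, 15)
identity = tuple(range(1, 16))
print(spearman_distance(sigma2, identity))
print(perm_kernel(sigma2, identity))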
“…Nevertheless, note that f is a positive function, whereas a Gaussian process realization can take negative values. In this case, several options are possible: first, we can ignore the information of the inequality constraint; second, we can use a Gaussian process under inequality constraints (see [6]); third, we can apply a transformation of the function to remove the inequality constraint. We choose the third strategy here and model log(f) by a Gaussian process realization.…”
Section: Application To the Optimization Of Latin Hypercube Designs
confidence: 99%
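A minimal sketch of the third strategy (kernel choice and test function are assumptions, not taken from the cited work): fit an unconstrained GP to log(f) and exponentiate the predictor, which is positive by construction.

# Sketch of the log-transform strategy: model log(f) with an
# unconstrained GP; exp of the fitted surface is automatically positive.
# The RBF kernel and the test function below are illustrative assumptions.
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x_obs = np.linspace(0.05, 0.95, 6)
f_obs = np.exp(np.sin(3.0 * x_obs))   # a strictly positive function
z_obs = np.log(f_obs)                 # transformed observations

x_new = np.linspace(0.0, 1.0, 101)
K = rbf(x_obs, x_obs) + 1e-8 * np.eye(6)
mean_log = rbf(x_new, x_obs) @ np.linalg.solve(K, z_obs)
f_pred = np.exp(mean_log)             # positive by construction
print(f"min prediction: {f_pred.min():.4f} (always > 0)")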