2020
DOI: 10.1109/tsp.2020.2997940
Improved Most Likely Heteroscedastic Gaussian Process Regression via Bayesian Residual Moment Estimator

Abstract: This paper proposes an improved most likely heteroscedastic Gaussian process (MLHGP) algorithm to handle a kind of nonlinear regression problems involving input-dependent noise. The improved MLHGP follows the same learning scheme as the current algorithm by use of two Gaussian processes (GPs), with the first GP for recovering the unknown function and the second GP for modeling the input-dependent noise. Unlike the current MLHGP pursuing an empirical estimate of the noise level which is provably biased in most …
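The two-GP scheme described in the abstract can be sketched in NumPy. Everything below is illustrative, not the paper's implementation: the RBF kernel, length scales, synthetic data, and iteration count are all assumed, and the noise step shown is the empirical residual-based estimate that the paper identifies as biased (the proposed Bayesian residual moment estimator is not reproduced here).

```python
import numpy as np

def rbf(A, B, ls=0.1):
    """Squared-exponential kernel between 1-D input arrays."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

def gp_mean(x_tr, y_tr, x_te, noise_var, ls=0.1):
    """GP posterior mean; noise_var is a scalar or per-point variance."""
    K = rbf(x_tr, x_tr, ls) + np.diag(np.broadcast_to(noise_var, x_tr.shape))
    return rbf(x_te, x_tr, ls) @ np.linalg.solve(K, y_tr)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80)
noise_sd = 0.05 + 0.4 * x                    # input-dependent noise level
y = np.sin(2 * np.pi * x) + rng.normal(0.0, noise_sd)

noise_var = np.full_like(x, 0.1)             # homoskedastic starting guess
for _ in range(5):                           # alternate the two GPs
    # GP1: recover the latent function under the current noise model
    f_hat = gp_mean(x, y, x, noise_var)
    # empirical residual-based noise estimate (the step the paper improves)
    log_r2 = np.log((y - f_hat) ** 2 + 1e-8)
    # GP2: smooth the log residuals into an input-dependent noise model
    c = log_r2.mean()
    noise_var = np.exp(gp_mean(x, log_r2 - c, x, 2.0, ls=0.3) + c)

f_final = gp_mean(x, y, x, noise_var)
```

Modeling the log of the squared residuals with the second GP keeps the predicted noise variance positive after exponentiation, which is the usual device in MLHGP-style schemes.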

Cited by 28 publications (10 citation statements) · References 25 publications
“…Notice that the regular dropout could be interpreted as a Bayesian approximation of a Gaussian process [ 31 ]. By applying the dropout during both training and inference, it is possible to analyze it as if predictions from many different networks have been made, that is, a Monte Carlo sample from the space of all possible networks.…”
Section: Methods
confidence: 99%
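The Monte Carlo dropout idea in the excerpt above can be sketched in a few lines of NumPy. The tiny network, its random weights, and the dropout rate are all illustrative assumptions, not taken from the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative fixed weights for a tiny one-hidden-layer network
W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def forward(x, p_drop=0.5, mc_dropout=True):
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
    if mc_dropout:                              # dropout kept ON at inference
        mask = rng.random(h.shape) > p_drop
        h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.3]])
# each stochastic pass samples one "sub-network"; the spread of the
# predictions approximates the predictive uncertainty
samples = np.stack([forward(x) for _ in range(200)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```

Averaging over many dropout masks is the Monte Carlo sample "from the space of all possible networks" that the excerpt describes; the sample standard deviation serves as the uncertainty estimate.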
“…This sample deviation can be used as an alternative to Equation () for the variance-field observations, and requires to first establish the prediction of $\hat{m}(\mathbf{x})$. Different smoothing techniques 44–46 can be used to approximate $\hat{m}(\mathbf{x})$, and the one adopted here is to introduce an additional GP with homoskedasticity assumption 30,36 . This tertiary GP will be denoted herein as GP m .…”
Section: Stochastic Emulation for EDP Distribution Approximation With...
confidence: 99%
“…Different smoothing techniques [44–46] can be used to approximate $\hat{m}(\mathbf{x})$, and the one adopted here is to introduce an additional GP with homoskedasticity assumption. 30,36 This tertiary GP will be denoted herein as GP m . The workflow for this updated stochastic emulation algorithm, denoted ER-SE, is shown in Figure 1 and summarized below.…”
Section: Algorithmic Implementation of the Improved Stochastic Emulation
confidence: 99%
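The smoothing step described in these excerpts — replicate sample deviations smoothed by an additional homoskedastic GP (what the excerpts call GP m) — can be sketched as follows. The kernel, noise level, and synthetic variance field are assumed for illustration:

```python
import numpy as np

def rbf(A, B, ls=0.3):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 25)
true_sd = 0.1 + 0.5 * x                      # underlying variance field
reps = rng.normal(0.0, true_sd, size=(40, 25))
s = reps.std(axis=0)                         # noisy sample deviations

# homoskedastic GP smoothing the deviation observations into m_hat(x)
tau2 = 0.01                                  # assumed common observation noise
alpha = np.linalg.solve(rbf(x, x) + tau2 * np.eye(25), s - s.mean())
x_te = np.linspace(0.0, 1.0, 100)
m_hat = rbf(x_te, x) @ alpha + s.mean()      # smoothed estimate of m(x)
```

Because the sample deviations are themselves noisy, the common observation-noise term tau2 plays the role of the homoskedasticity assumption mentioned in the excerpts: one shared noise level for all variance-field observations.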
“…The most commonly used method for probabilistic evaluation is the Bayesian theory, which utilises sampling information to correct prior probabilities to obtain posterior probabilities so that the distribution of posterior probabilities moves closer to the true distribution [18–21]. The Bayes formula [20] is
$$p(a \mid b) = \frac{p(b \mid a)\,p(a)}{p(b)}$$…”
Section: Basic Principles
confidence: 99%
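The Bayes formula from this excerpt, applied to a small binary-hypothesis example; the prior and likelihood values are purely illustrative:

```python
# Bayes formula for a binary hypothesis: p(a|b) = p(b|a) p(a) / p(b)
p_a = 0.01                                   # prior probability of a
p_b_given_a = 0.95                           # likelihood of b under a
p_b_given_not_a = 0.10                       # likelihood of b under not-a

# the law of total probability gives the evidence p(b)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
posterior = p_b_given_a * p_a / p_b          # p(a|b) ≈ 0.0876
```

Even with a strong likelihood ratio, the small prior keeps the posterior below 9% — the "correction" of the prior by sampled evidence that the excerpt describes.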