2021
DOI: 10.48550/arxiv.2101.05328
Preprint

Uniform Error and Posterior Variance Bounds for Gaussian Process Regression with Application to Safe Control

Armin Lederer, Jonas Umlauft, Sandra Hirche

Abstract: In application areas where data generation is expensive, Gaussian processes are a preferred supervised learning model due to their high data-efficiency. Particularly in model-based control, Gaussian processes allow the derivation of performance guarantees using probabilistic model error bounds. To make these approaches applicable in practice, two open challenges must be solved: (i) existing error bounds rely on prior knowledge, which might not be available for many real-world tasks; (ii) the relationship between…
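Since the abstract is truncated, the following is only a minimal, hedged sketch of how such probabilistic model error bounds are typically consumed: the GP posterior mean μ and standard deviation σ give an envelope |f(x) − μ(x)| ≤ √β·σ(x). The target function, kernel, and the constant β below are illustrative assumptions, not values from the paper, which derives such constants rather than assuming them.

```python
# Minimal sketch (not the authors' implementation) of using a probabilistic
# GP error bound of the form |f(x) - mu(x)| <= sqrt(beta) * sigma(x).
# beta is a hypothetical confidence parameter chosen here for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    return np.sin(3 * x).ravel()            # toy stand-in for the unknown function

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2, size=(30, 1))   # "well-distributed" training inputs
y_train = f(X_train) + 0.05 * rng.standard_normal(30)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=0.05**2)
gpr.fit(X_train, y_train)

X_test = np.linspace(0, 2, 200).reshape(-1, 1)
mu, sigma = gpr.predict(X_test, return_std=True)

beta = 4.0                                   # assumed confidence scaling
violations = np.abs(f(X_test) - mu) > np.sqrt(beta) * sigma
print(f"bound violated at {violations.mean():.1%} of test points")
```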

Cited by 3 publications (4 citation statements)
References 31 publications
“…We tune the hyper-parameters by solving the optimization problem (43). Besides, it is shown by Lederer et al. 33,34 that the approximation errors of GPR uniformly converge to zero with a sufficient amount of well-distributed training data.…”
Section: Learning of the Value Function (Nonnegativity-Enforced GPR)
confidence: 99%
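Optimization problem (43) is defined in the citing paper and is not reproduced here; a common stand-in for GP hyper-parameter tuning is maximization of the log marginal likelihood, sketched below. The kernel, data, and noise level are illustrative choices.

```python
# Hedged sketch: hyper-parameter tuning by maximizing the log marginal
# likelihood, a standard surrogate for the citing paper's problem (43).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.tanh(2 * X).ravel() + 0.1 * rng.standard_normal(40)

# Initial guesses; fit() optimizes them with L-BFGS on the log marginal likelihood.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=0.1**2, n_restarts_optimizer=5)
gpr.fit(X, y)

print("tuned kernel:", gpr.kernel_)
print("log marginal likelihood:", gpr.log_marginal_likelihood_value_)
```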
“…GPR stands out from many machine learning techniques for its ability to generalize well to small training sets and to provide a measure of its own inaccuracy. 33,34 Recalling the procedure in Section 3, at each iteration t, by concatenating the q training input data and the corresponding training output data into matrices Z_t and Y_t, respectively, the following training data dictionary 𝒟_t is obtained:…”
Section: Learning of the Value Function (Nonnegativity-Enforced GPR)
confidence: 99%
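The dictionary update described in the quote amounts to row-wise concatenation of the new batch at each iteration. The notation Z_t, Y_t, q, and 𝒟_t follows the quote; the toy data and the GP refit step are assumptions about how the dictionary is consumed.

```python
# Hedged sketch of the quoted dictionary update: at each iteration t, q new
# input/output pairs are stacked into Z_t and Y_t, and the dictionary D_t grows
# by concatenation. The refit is illustrative, not the citing paper's code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

q, d = 5, 2                                # q new samples per iteration, input dim d
Z, Y = np.empty((0, d)), np.empty((0,))    # D_0: empty dictionary
rng = np.random.default_rng(2)

gpr = GaussianProcessRegressor(alpha=0.05**2)
for t in range(3):
    Z_t = rng.uniform(-1, 1, size=(q, d))
    Y_t = np.sin(Z_t).sum(axis=1) + 0.05 * rng.standard_normal(q)
    Z, Y = np.vstack([Z, Z_t]), np.concatenate([Y, Y_t])  # D_t = D_{t-1} u (Z_t, Y_t)
    gpr.fit(Z, Y)                          # re-learn the surrogate on D_t
    _, sigma = gpr.predict(Z, return_std=True)
    print(f"t={t}: |D_t|={len(Y)}, mean predictive std={sigma.mean():.3f}")
```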
“…Note that the bound l_σ depends neither on the chosen regulator parameters nor on the initial conditions. Considering now the jump dynamics, under Assumption 2.5 of a Lipschitz continuous kernel, we can explicitly derive an upper bound on the value of σ²₊ at each jump (see [35, Theorem 1])…”
Section: Gaussian Process-Based Adaptive Regulation
confidence: 99%
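For context, the quantity the quoted statement bounds is the GP posterior variance. Below is only the standard closed form σ²(x*) = k(x*, x*) − k(x*, X)(K + σₙ²I)⁻¹k(X, x*), not the jump-specific bound of [35, Theorem 1]; the kernel and noise level are illustrative.

```python
# Standard GP posterior variance; this is the quantity the quoted bound
# controls at each jump. Kernel and noise are illustrative choices.
import numpy as np

def rbf(a, b, ell=0.5):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(20, 1))        # training inputs
sn2 = 1e-2                                  # observation noise variance
K = rbf(X, X) + sn2 * np.eye(len(X))

x_star = np.array([[0.3]])
k_star = rbf(x_star, X)                     # shape (1, 20)
var = rbf(x_star, x_star) - k_star @ np.linalg.solve(K, k_star.T)
print("posterior variance at x*:", var.item())  # shrinks as data accumulates near x*
```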
“…An alternative hypothesis is to take the support of the prior distribution of the GP as the belief space from which to seek the true function. This hypothesis has been employed in stochastic bandit problems based on GPs [33,34] and more recently has been used to establish general interpretable bounds for basic GP models [19,35]. The sample space is the largest possible space of candidate functions and leads to bounds that can be approximated for common settings with relative ease, in comparison to the RKHS approaches.…”
Section: Error Bounds for the Univariate ResGP
confidence: 99%
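The "support of the prior as belief space" hypothesis can be probed empirically: draw candidate truths from the GP prior itself and measure how often a uniform envelope μ ± √β·σ contains them. The value of β, the kernel, and the sampling setup below are assumptions for illustration, not any of the cited bounds.

```python
# Hedged illustration: sample ground-truth functions from the GP prior,
# fit on noisy observations, and check coverage of mu +/- sqrt(beta)*sigma.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
X_grid = np.linspace(0, 1, 100).reshape(-1, 1)
prior = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
beta, covered = 4.0, 0

for seed in range(50):
    f = prior.sample_y(X_grid, random_state=seed).ravel()  # truth drawn from the prior
    idx = rng.choice(100, size=15, replace=False)
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
    gpr.fit(X_grid[idx], f[idx] + 1e-2 * rng.standard_normal(15))
    mu, sigma = gpr.predict(X_grid, return_std=True)
    covered += np.all(np.abs(f - mu) <= np.sqrt(beta) * sigma)

print(f"uniform bound held on {covered}/50 prior samples")
```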