2020
DOI: 10.48550/arxiv.2012.08863
Preprint

On The Verification of Neural ODEs with Stochastic Guarantees

Abstract: We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (a…
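To make the Reachtube idea concrete, here is a minimal sketch of a sampling-based over-approximation for a toy time-continuous system. This is not the paper's SLR algorithm (which uses Lagrangian sensitivity bounds and stochastic global optimization); the dynamics `f`, the inflation factor, and all parameter names are illustrative assumptions.

```python
# Illustrative sketch only: a Monte Carlo Reachtube over-approximation for a
# toy time-continuous system. NOT the paper's SLR method; here the per-step
# bounding boxes are heuristic, whereas SLR attaches confidence intervals.
import numpy as np

def f(x):
    # Toy "Neural ODE" right-hand side: one tanh layer with fixed weights.
    W = np.array([[0.0, 1.0], [-1.0, -0.1]])
    return np.tanh(x @ W.T)

def sampled_reachtube(x0_center, x0_radius, t_horizon=1.0, dt=0.01,
                      n_samples=2000, inflation=1.05, seed=0):
    """Euler-integrate sampled initial states and record, per time step, an
    axis-aligned bounding box inflated by a safety factor. More samples make
    it more likely (heuristically) that the box covers the true reach set."""
    rng = np.random.default_rng(seed)
    d = len(x0_center)
    # Sample uniformly from the initial ball of radius x0_radius.
    u = rng.normal(size=(n_samples, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = x0_radius * rng.uniform(size=(n_samples, 1)) ** (1.0 / d)
    x = x0_center + r * u

    boxes = []
    for _ in range(int(t_horizon / dt)):
        x = x + dt * f(x)                       # explicit Euler step
        lo, hi = x.min(axis=0), x.max(axis=0)
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        boxes.append((mid - inflation * half, mid + inflation * half))
    return boxes

boxes = sampled_reachtube(np.array([1.0, 0.0]), 0.1)
print("box at final time:", boxes[-1])
```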

Cited by 2 publications (2 citation statements) | References 32 publications

“…OptNet with CBFs has been used in neural networks as a filter for safe controls [36], but OptNet is not trainable, thus potentially limiting the system's learning performance. In [14,25,59,16,17], safety-guaranteed neural network controllers have been learned through verification-in-the-loop training. A safe neural network filter has been proposed in [15] for a specific vehicle model using verification methods.…”
Section: Related Work
confidence: 99%
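For readers unfamiliar with the CBF-as-filter idea mentioned in the quote, the following is a hedged sketch, not the OptNet layer of [36]: with single-integrator dynamics and one barrier constraint, the minimally invasive QP has a closed-form projection. The dynamics, barrier function `h`, and gain `alpha` are assumptions chosen for illustration.

```python
# Sketch of a CBF safety filter (not the OptNet-based layer from [36]):
# project a nominal control onto the half-space cut out by one control
# barrier function constraint.
import numpy as np

def cbf_filter(x, u_nom, alpha=1.0, obstacle=np.zeros(2), radius=1.0):
    """Single-integrator dynamics x' = u with barrier h(x) = |x - c|^2 - r^2.
    The CBF condition grad_h(x) . u + alpha * h(x) >= 0 is affine in u, so
    the QP  min |u - u_nom|^2  s.t. that constraint has a closed form."""
    h = np.sum((x - obstacle) ** 2) - radius ** 2
    grad_h = 2.0 * (x - obstacle)
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:          # nominal control already satisfies the CBF
        return u_nom
    # Project onto the constraint boundary grad_h . u + alpha * h = 0.
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h

x = np.array([1.5, 0.0])                       # state near the unsafe disk
u = cbf_filter(x, u_nom=np.array([-1.0, 0.0]))
print("filtered control:", u)
```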
“…A more rigorous approach, albeit expensive, is to compute an upper bound of the loss over each safety domain and minimize the upper bound via stochastic gradient descent [15], [53]. While computing an upper bound of a network's output is difficult and may overestimate the true maximum, it provides certified guarantees on the worst-case loss.…”
Section: Safety-domain Training Generalizes Adversarial Training and ...
confidence: 99%
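One standard way to compute such a certified upper bound is interval bound propagation, sketched below; this is not necessarily the specific method of [15] or [53], and the network weights here are random stand-ins.

```python
# Minimal interval-bound-propagation (IBP) sketch: propagate an input box
# through an affine+ReLU network to upper-bound its worst-case output.
import numpy as np

def ibp_bounds(lo, hi, layers):
    """Propagate element-wise interval bounds [lo, hi] through (W, b) layers
    with ReLU between them. Center/half-width form handles the sign split of
    W in one step: |W| @ half is the worst-case deviation over the box."""
    for i, (W, b) in enumerate(layers):
        mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0
        mid = W @ mid + b
        half = np.abs(W) @ half
        lo, hi = mid - half, mid + half
        if i < len(layers) - 1:                 # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 2)), np.zeros(8)),
          (rng.normal(size=(1, 8)), np.zeros(1))]
lo, hi = ibp_bounds(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), layers)
# hi upper-bounds the output over the whole input box, so a loss built from
# hi upper-bounds the true worst-case loss; minimizing it with SGD gives the
# certified training scheme the quote describes.
print("certified output bounds:", lo, hi)
```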