2019
DOI: 10.48550/arxiv.1909.09884
Preprint

Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control

Abstract: Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun deployment in the real world. Prior to their widespread adoption, safety guarantees are needed on the controller behaviour that properly account for the uncertainty within the model as well as sensor noise. Bayesian neural networks, which assume a prior over the weights, have been shown capable of producing such uncertainty measures, but properties surrounding their safet…
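To make the abstract's claim concrete, here is a minimal sketch of how a Bayesian neural network yields an uncertainty measure: draw several predictions from samples of the weight posterior and report their spread. The linear "controller" and the Gaussian posterior below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_predictions(x, n_samples=200):
    # Stand-in for a BNN controller: a single linear weight with an
    # assumed Gaussian posterior w ~ N(0.5, 0.05^2). Each posterior
    # sample gives one candidate control output for input x.
    weights = rng.normal(loc=0.5, scale=0.05, size=n_samples)
    return weights * x

x = 2.0
preds = sample_predictions(x)
mean = preds.mean()        # predictive mean: the control action taken
uncertainty = preds.std()  # predictive spread: the uncertainty measure
```

In a real deployment the sampling step would instead run multiple stochastic forward passes through the network (e.g. via Monte Carlo dropout), but the mean/spread summary is the same.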

Cited by 4 publications (4 citation statements)
References 26 publications
“…We discuss reasons these methods are fooled under our framework in Sections 5 and 6. Statistical techniques for quantifying the adversarial robustness of BNNs were introduced by Cardelli et al. (2019a) and employed by Michelmore et al. (2019) to detect erroneous behaviours in the context of autonomous driving. Furthermore, Ye & Zhu (2018) considered a Bayesian approach to adversarial training, reporting improved performance compared with non-Bayesian adversarial training approaches.…”
Section: Introduction
confidence: 99%
“…Åsljung et al. (2017) used extreme value theory to model the safety of autonomous vehicles (AVs). Michelmore et al. (2019) designed a statistical framework to evaluate the safety of deep neural network controllers and assessed the safety of AVs. Burton et al. (2020) provided a multidisciplinary perspective on AV safety spanning engineering, ethics and law.…”
Section: Literature Review
confidence: 99%
“…In either case, a large number of samples (e.g. potential positions of obstacles) is made available, requiring an approach that reasons about the magnitude of the uncertainty based on these samples and ensures safety while allowing real-time computation [20]. The approach proposed in this work addresses the issue of generating safe trajectories while working with an arbitrary finite number of samples.…”
Section: Introduction
confidence: 99%
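The sample-based reasoning described in this citation statement can be sketched as a simple check: given finitely many sampled obstacle positions, accept a trajectory only if every trajectory point clears every sample by a safety margin. The function name, the margin, and the Gaussian obstacle samples below are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_trajectory_safe(trajectory, obstacle_samples, margin=0.5):
    # Pairwise distances between each trajectory point and each sampled
    # obstacle position; the trajectory is deemed safe only if the
    # smallest such distance exceeds the safety margin.
    dists = np.linalg.norm(
        trajectory[:, None, :] - obstacle_samples[None, :, :], axis=-1
    )
    return bool(dists.min() > margin)

trajectory = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
# 100 sampled positions of one obstacle, centred well away from the path
obstacle_samples = rng.normal(loc=[1.5, 2.0], scale=0.1, size=(100, 2))
safe = is_trajectory_safe(trajectory, obstacle_samples)
```

This worst-case-over-samples rule is deliberately conservative; the cited approach instead reasons about the uncertainty magnitude implied by the samples, trading some conservatism for real-time feasibility.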