2020
DOI: 10.1007/978-3-030-54549-9_13

Assuring the Safety of Machine Learning for Pedestrian Detection at Crossings

Abstract: Machine Learnt Models (MLMs) are now commonly used in self-driving cars, particularly for tasks such as object detection and classification within the perception pipeline. The failure of such models to perform as intended could lead to hazardous events such as failing to stop for a pedestrian at a crossing. It is therefore crucial that the safety of the MLM can be proactively assured and should be driven by explicit and concrete safety requirements. In our previous work, we defined a process that integrates th…


Cited by 33 publications (21 citation statements)
References 11 publications
“…Previous work on the safety assurance of machine learning has focused on the structure of the assurance case and associated processes with respect to existing safety standards [6,27,16,3,10]. Other work has focused on the effectiveness of specific metrics and measures on providing meaningful statements related to safety properties of the ML function [11,20,29,12].…”
Section: Related Work
confidence: 99%
“…Methods presented earlier are cited as being a way to tackle precise problems, but they also make the case for more traditional techniques such as: redundancy and fault-tolerance subsystems, look-ahead components, backup systems, coverage criteria, or traceability through the collection of neural-network artifacts such as weights or versions. For instance, [71] proposed an approach in which the requirements of an ML model drive the safety assurance process for that model. The process is split into five stages: requirements elicitation, data management, model learning, model verification and model deployment.…”
Section: Direct Certification
confidence: 99%
“…NN-dependability-kit [39] is a data-driven toolbox that aims to provide directives for reducing uncertainty across all life-cycle steps, notably robustness analysis with perturbation metrics and t-way coverage. This toolbox was used by [71], mentioned in the previous section, with its requirements-driven safety assurance approach for ML. Building upon NN-dependability-kit, "specific requirements that are explicitly and traceably linked to system-level safety analysis" were tackled.…”
Section: Others
confidence: 99%
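The t-way coverage mentioned in the statement above can be illustrated with a minimal sketch. This is not code from NN-dependability-kit; it is an illustrative pure-Python example, with made-up feature names, of 2-way (pairwise) coverage: the fraction of all feature-value pairs that a test set exercises.

```python
from itertools import combinations, product

def pairwise_coverage(tests, domains):
    """Fraction of all feature-value pairs covered by the test set.

    tests   -- list of dicts mapping feature name -> value
    domains -- dict mapping feature name -> list of possible values
    """
    features = sorted(domains)
    # All (feature, value) pairs that could co-occur in a test.
    required = set()
    for f1, f2 in combinations(features, 2):
        for v1, v2 in product(domains[f1], domains[f2]):
            required.add((f1, v1, f2, v2))
    # Pairs actually exercised by the tests.
    covered = set()
    for t in tests:
        for f1, f2 in combinations(features, 2):
            covered.add((f1, t[f1], f2, t[f2]))
    return len(covered & required) / len(required)

# Hypothetical scenario features for a pedestrian-detection test set.
domains = {"weather": ["sun", "rain"],
           "time": ["day", "night"],
           "crossing": ["zebra", "signal"]}
tests = [
    {"weather": "sun", "time": "day", "crossing": "zebra"},
    {"weather": "rain", "time": "night", "crossing": "signal"},
]
print(pairwise_coverage(tests, domains))  # 0.5
```

Two tests cover only 6 of the 12 possible feature-value pairs here; a covering array would reach 1.0 with far fewer tests than the full factorial.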
“…Deep neural network (DNN) image classifiers are increasingly being proposed for use in safety-critical applications [6,15,19,24], where their accuracy is quoted as close to, or exceeding, that of human operators [3]. It has been shown, however, that when the inputs to the classifier are subjected to small perturbations, even highly accurate DNNs can produce erroneous results [8,9,30].…”
Section: Introduction
confidence: 99%
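The fragility described in the statement above can be sketched in miniature. The following is not a DNN attack from the cited works; it is an illustrative pure-Python example, with made-up weights and inputs, of the FGSM-style idea behind such perturbations: stepping each input component slightly against the sign of its weight flips the decision of a linear scorer even though no single component moves far.

```python
def classify(w, x, b=0.0):
    """Linear scorer: returns class 1 if w.x + b > 0, else class 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def perturb(w, x, eps):
    """Worst-case small perturbation: move every component of x
    by eps in the direction that lowers the decision score."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [1.5, -1.0, 0.8]         # made-up model weights
x = [0.5, 0.2, 0.1]          # clean input, score = 0.63 -> class 1
x_adv = perturb(w, x, 0.2)   # each component moved by only 0.2
print(classify(w, x), classify(w, x_adv))  # prints: 1 0
```

Because the score drops by eps times the sum of the absolute weights, many small per-component changes accumulate into a large change in the output; in high-dimensional image inputs this effect is far more pronounced.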