2019
DOI: 10.1007/978-3-030-26250-1_30
Confidence Arguments for Evidence of Performance in Machine Learning for Highly Automated Driving Functions

Abstract: Due to their ability to efficiently process unstructured and highly dimensional input data, machine learning algorithms are being applied to perception tasks for highly automated driving functions. The consequences of failures and insufficiencies in such algorithms are severe and a convincing assurance case that the algorithms meet certain safety requirements is therefore required. However, the task of demonstrating the performance of such algorithms is non-trivial, and as yet, no consensus has formed regardin…

Cited by 26 publications (19 citation statements)
References 14 publications
“…Alternatively, one could abstract the perception and planning subsystems such that test results of the perception subsystem can be re-used for varying planners. For this purpose, modular functional system architectures [5,38,42,116] could be implemented with contracts, assumptions, and guarantees at the interfaces between the perception and planning subsystems [22,35,45,60].…”
Section: Modeling the Perception-Control Linkage
confidence: 99%
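The contract-based interface idea in the excerpt above can be illustrated with a small sketch. This is not code from any of the cited works; the class, its fields, and the numeric bounds are all hypothetical, chosen only to show the assume/guarantee split at a perception-planning interface: the planner states an assumption on its inputs, and perception states a guarantee on its outputs.

```python
from dataclasses import dataclass

@dataclass
class InterfaceContract:
    """Hypothetical assume/guarantee contract between a perception
    subsystem (guarantor) and a planning subsystem (assumer)."""
    max_position_error_m: float   # guarantee: bound on reported position error
    min_detection_range_m: float  # assumption: objects closer than this are out of scope

    def perception_guarantee_holds(self, reported_error_m: float) -> bool:
        # Perception side: does the delivered estimate meet the guaranteed bound?
        return reported_error_m <= self.max_position_error_m

    def planner_assumption_holds(self, object_range_m: float) -> bool:
        # Planner side: is this input within the domain the contract assumes?
        return object_range_m >= self.min_detection_range_m

# Example: a contract allowing at most 0.5 m position error,
# assumed valid only for objects at least 2.0 m away.
contract = InterfaceContract(max_position_error_m=0.5, min_detection_range_m=2.0)
```

Under such a contract, perception test results could be re-used across planners, since any planner relying only on the guarantee (and respecting the assumption) need not be re-verified against the perception internals.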
“…An argument that the safety requirements on the ML system are free from significant specification insufficiencies is an essential part of safety assurance activities, but will not be elaborated in detail in this paper. In [7], the concept of the safety contract for an ML system was expressed as the following condition that must be fulfilled for the ML system to be considered safe.…”
Section: Related Work
confidence: 99%
“…Validation targets are assessed by evaluating a sample set S drawn from the entire input space I, limited to those inputs that fulfill the assumptions A. This leads to the following generic definition of the condition for meeting a validation target V T , where E is an evidence function that returns some value based on the sample set S and the model M S (also derived from [7]).…”
Section: Fig. 2 Summary of Safety Analysis Approach
confidence: 99%
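The "generic definition" that the excerpt refers to is not reproduced on this page. Purely as a reading aid, and not as a quotation of [7], one plausible shape of such a condition, using only the symbols the excerpt introduces plus a hypothetical acceptance threshold $t_{VT}$, would be:

```latex
VT \text{ is met} \iff E(S, M_S) \geq t_{VT},
\qquad S \subseteq \{\, i \in I \mid A(i) \,\}
```

Here $E$ is the evidence function evaluated on the sample set $S$ and model $M_S$, and the sample set is restricted to inputs of $I$ that satisfy the assumptions $A$; the actual form of the condition in [7] may differ.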
“…RQ9 and RQ10 capture these requirements with reference to the ODD; it is therefore crucial that the ODD is clearly documented and validated as part of the vehicle safety process. As well as exploring the scope of the ODD to consider different situations, we must also consider the impact on the images of the distance of Ego from the crossing (affecting the size of image features), and the possibility of occlusions in the image (we have discussed these effects in more detail in [2]). RQ11 and RQ12 address this issue.…”
Section: RQ13
confidence: 99%
“…In our ongoing work, we intend to extend this to consider ML verification and deployment, which are two crucial aspects for a compelling safety case. Furthermore, formalizing these requirements in contract-based design allows machine support for refinement checks within a component-based system [2]. We hope that this work is of benefit to both researchers and engineers and helps inform the current debate concerning the safety assurance and regulation of autonomous driving.…”
Section: RQ3
confidence: 99%