Deep learning (DL) is regarded as an indispensable building block for perceiving the environment with the detail and accuracy required by automated driving functions. Nevertheless, its black-box nature and the unpredictability intertwined with it still hinder its use in safety-critical systems. This work therefore addresses the problem of making this seemingly unpredictable behavior measurable by providing a risk-based verification strategy, as required by ISO 21448. Specifically, a method is developed to break down acceptable risk into quantitative performance targets for the individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (µODD). As full test coverage is not feasible, the strategy suggests distributing test effort across these areas according to the associated risk. Moreover, the testing approach answers how much test coverage and how much confidence in the test results are required, and how these figures relate to safety integrity levels (SILs).
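To make the relation between test coverage and confidence concrete, consider a minimal sketch based on the standard success-run argument, assuming independent test samples drawn from one µODD area; this is a textbook statistical bound, not necessarily the paper's exact derivation. If all $n$ samples pass, the claim that the component's failure probability in that area is below a target $p$ holds with confidence $C$ whenever

\[
(1 - p)^{n} \le 1 - C
\quad\Longleftrightarrow\quad
n \ge \frac{\ln(1 - C)}{\ln(1 - p)} .
\]

For example, demonstrating $p < 10^{-4}$ with $C = 95\,\%$ requires roughly $n \approx 3 \times 10^{4}$ failure-free tests per area; a risk-based strategy of the kind described above would then allocate such test budgets across µODD areas in proportion to the associated risk.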