This paper describes the application of machine learning techniques, and an associated assurance case, for a safety-relevant chassis control system. The method applied during the assurance process is described, including the sources of evidence and deviations from previous ISO 26262-based approaches. The paper highlights how the choice of machine learning approach supports the assurance case, especially regarding the inherent explainability of the algorithm and its robustness to minor input changes. In addition, the challenges that arise when applying more complex machine learning techniques, for example in the domain of automated driving, are also discussed. The main contribution of the paper is the demonstration of an assurance approach for machine learning for a comparatively simple function. This allowed the authors to develop a convincing assurance case whilst identifying pragmatic considerations in the application of machine learning to safety-relevant functions.
Simulations are commonly used to validate the design of autonomous systems. However, as these systems are increasingly deployed into safety-critical environments subject to aleatoric uncertainties, and increasingly rely on machine learning components subject to epistemic uncertainties, validation methods that account for these uncertainties are lacking. We present an approach that evaluates signal propagation in logical system architectures, in particular environment perception chains, focusing on the effects of uncertainty in order to determine functional limitations. The perception-based autonomous driving system is represented by connected elements that together constitute a certain functionality. The elements are based on (meta-)models that describe technical components and their behavior. The surrounding environment in which the system is deployed is modeled by parameters derived from a quasi-static scene. All parameter variations together completely define the input-states for the designed perception architecture. The input-states are treated as random variables inside the component models to simulate aleatoric/epistemic uncertainty. The dissimilarity between model input and output serves as a measure of the total uncertainty present in the system. The uncertainties are propagated through consecutive components and calculated in the same manner. The final result consists of input-states that model uncertainty effects for the specified functionality and thereby highlight shortcomings of the designed architecture.
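The propagation scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: each component is modeled as a deterministic transfer plus additive noise standing in for aleatoric/epistemic uncertainty, and a simple mean absolute deviation between a component's input and output samples stands in for the paper's dissimilarity measure. All component names and noise parameters are illustrative assumptions.

```python
import random

def make_component(bias, noise_std):
    """Model a perception component: a deterministic transfer (here a
    constant bias) plus additive Gaussian noise representing the
    component's aleatoric/epistemic uncertainty (illustrative only)."""
    def component(samples):
        return [x + bias + random.gauss(0.0, noise_std) for x in samples]
    return component

def dissimilarity(inp, out):
    """Mean absolute deviation between component input and output --
    a simple stand-in for the dissimilarity measure of uncertainty."""
    return sum(abs(o - i) for i, o in zip(inp, out)) / len(inp)

def propagate(input_state, chain, n_samples=10_000):
    """Propagate one input-state through consecutive components,
    recording the uncertainty measure introduced at each stage."""
    samples = [input_state] * n_samples
    measures = []
    for component in chain:
        out = component(samples)
        measures.append(dissimilarity(samples, out))
        samples = out  # output uncertainty feeds the next component
    return samples, measures

random.seed(0)
chain = [make_component(0.0, 0.5),   # e.g. a sensor model
         make_component(0.1, 0.2)]   # e.g. an object-detection model
final, per_stage = propagate(2.0, chain)
```

Running `propagate` over a grid of input-states (the parameter variations of the quasi-static scene) would then flag the states whose accumulated dissimilarity exceeds a functional limit.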
Deep learning (DL) is seen as an inevitable building block for perceiving the environment with the detail and accuracy required by automated driving functions. Despite this, its black-box nature and the unpredictability intertwined with it still hinder its use in safety-critical systems. This work therefore addresses the problem of making this seemingly unpredictable nature measurable by providing a risk-based verification strategy, as required by ISO 21448. In detail, a method is developed to break down acceptable risk into quantitative performance targets for individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (µODD). As full test coverage is not feasible, the strategy distributes test effort across these areas according to the associated risk. Moreover, the testing approach indicates how much test coverage and confidence in the test results are required, and how these figures relate to safety integrity levels (SILs).
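A risk-proportional split of a test budget across µODD areas, as the abstract describes, can be sketched as follows. This is a minimal illustration under assumed inputs (the area names, risk weights, and the use of the standard success-run formula for the confidence calculation are not taken from the paper):

```python
import math

def allocate_tests(risk_by_area, total_tests):
    """Distribute a fixed test budget across µODD areas in proportion
    to each area's associated risk; largest-remainder rounding keeps
    the allocated total exactly equal to the budget."""
    total_risk = sum(risk_by_area.values())
    raw = {a: total_tests * r / total_risk for a, r in risk_by_area.items()}
    alloc = {a: int(v) for a, v in raw.items()}
    leftover = total_tests - sum(alloc.values())
    # give leftover tests to the areas with the largest fractional parts
    for a in sorted(raw, key=lambda a: raw[a] - alloc[a], reverse=True)[:leftover]:
        alloc[a] += 1
    return alloc

def required_tests(max_failure_rate, confidence):
    """Failure-free tests needed to claim the per-test failure rate is
    below max_failure_rate with the given statistical confidence
    (standard success-run / zero-failure formula, an assumption here)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

# hypothetical µODD areas; the risk weights are illustrative
risk = {"urban/rain/night": 0.5, "urban/clear/day": 0.2,
        "highway/rain/day": 0.2, "highway/clear/day": 0.1}
budget = allocate_tests(risk, 1000)
```

With this sketch, the highest-risk area receives half the budget, and `required_tests` shows how a target failure rate and confidence level translate into a concrete test count per area.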