Safety-critical systems with decisional abilities, such as autonomous robots, are about to enter our everyday life. Nevertheless, confidence in their behavior is still limited, particularly regarding safety. Considering the variety of hazards that can affect these systems, many techniques might be used to increase their safety. Among them, active safety monitors are a means to maintain system safety in spite of faults or adverse situations. The specification of the safety rules implemented in such devices is of crucial importance, but has hardly been explored so far. In this paper, we propose a complete framework for the generation of these safety rules based on the concept of safety margin. The approach starts from a hazard analysis and uses formal verification techniques to automatically synthesize the safety rules. It has been successfully applied to an industrial use case, a mobile manipulator robot for co-working.
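To make the safety-margin idea concrete, here is a minimal, purely illustrative sketch of a threshold-style safety rule that intervenes before a hazardous limit is actually reached. The rule format, names, and numbers below are hypothetical assumptions for illustration only, not the rule representation used in the paper.

    # Illustrative sketch only: a safety rule that triggers an intervention at a
    # margin before the hazardous limit (all names and values are hypothetical).

    HAZARD_LIMIT_M = 0.5   # distance at which a collision hazard becomes unavoidable
    SAFETY_MARGIN_M = 0.3  # margin accounting for detection and braking latency

    def safety_rule(distance_to_human_m: float, speed_mps: float) -> str:
        """Return the intervention the monitor should trigger for one observation."""
        if distance_to_human_m <= HAZARD_LIMIT_M:
            return "emergency_stop"        # hazard limit violated: strongest action
        if distance_to_human_m <= HAZARD_LIMIT_M + SAFETY_MARGIN_M and speed_mps > 0.1:
            return "brake"                 # inside the margin: act before the limit
        return "nominal"                   # no intervention needed

    assert safety_rule(1.0, 0.5) == "nominal"
    assert safety_rule(0.7, 0.5) == "brake"
    assert safety_rule(0.4, 0.0) == "emergency_stop"

The margin exists so that the intervention, with its own latency and stopping distance, completes before the hazard limit is crossed.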
Autonomous systems operating in the vicinity of humans are safety-critical in that they can potentially harm humans. As the complexity of autonomous system software makes the zero-fault objective hardly attainable, we adopt a fault-tolerance approach. We consider a separate safety channel, called a monitor, that is able to partially observe the system and to trigger safety-ensuring actuations. A systematic process for specifying a safety monitor is presented. Hazards are formally modeled, based on a risk analysis of the monitored system. A model checker is used to synthesize monitor behavior rules that ensure the safety of the monitored system. The potentially excessive limitation of system functionality due to the presence of the safety monitor is addressed through the notion of permissiveness. Tools have been developed to assist the process.
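The following sketch illustrates, in a heavily simplified form, the kind of check a model checker performs when evaluating a candidate monitor rule for both safety (no catastrophic state reachable) and permissiveness (functional states remain reachable). The transition system, state names, and rule encoding are invented for illustration and do not reflect the formalism used in the paper.

    # Illustrative sketch only: reachability-based check of a candidate monitor
    # rule for safety and permissiveness (states and rule encoding are hypothetical).

    from collections import deque

    # Tiny transition system: state -> set of successor states.
    TRANSITIONS = {
        "idle":         {"moving"},
        "moving":       {"near_human", "goal_reached"},
        "near_human":   {"collision", "moving"},   # without intervention, collision is possible
        "goal_reached": set(),
        "collision":    set(),
        "stopped":      set(),
    }

    CATASTROPHIC = {"collision"}
    FUNCTIONAL = {"goal_reached"}                  # states the mission still needs to reach

    def reachable(rule):
        """Explore states reachable from 'idle'; a rule overrides risky successors."""
        seen, frontier = {"idle"}, deque(["idle"])
        while frontier:
            state = frontier.popleft()
            for nxt in rule.get(state, TRANSITIONS[state]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    def safe_and_permissive(rule):
        states = reachable(rule)
        safe = states.isdisjoint(CATASTROPHIC)     # no catastrophic state reachable
        permissive = FUNCTIONAL.issubset(states)   # mission states still reachable
        return safe, permissive

    # Candidate rule: when near a human, allow only stopping or retreating to "moving".
    print(safe_and_permissive({"near_human": {"stopped", "moving"}}))  # (True, True)
    print(safe_and_permissive({}))                                     # (False, True)

A rule that simply forbids all motion would also be safe but not permissive; the point of the permissiveness criterion is to reject such over-restrictive candidates.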
The overall dependability of an interactive system is limited by that of its weakest component, which is usually its user interface. The presented approach integrates techniques from the dependable computing field with elements of user-centered design. Risk analysis and fault-tolerance techniques are used in combination with task analysis and modeling to describe and analyze the impact of system faults on human activities and the impact of human deviations or errors on system performance and overall mission performance. A technique for the systematic analysis of human errors, effects, and criticality (HEECA) is proposed. It is inspired by and adapted from the Failure Mode, Effects, and Criticality Analysis (FMECA) technique. The key points of the approach are: 1) the HEECA technique, combining a systematic analysis of the effects of system faults and of human errors; and 2) a task modeling notation to describe and assess the impact of system faults and human errors on operators' activities and system performance. These key points are illustrated on an example extracted from a case study in the space domain. The example demonstrates the feasibility of the approach as well as its benefits in identifying opportunities for redesigning the system, redesigning the operations, and modifying operators' training.
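Since HEECA is adapted from FMECA, its worksheets can be pictured as FMECA-style rows extended to human error modes. The sketch below shows one plausible way to organize and rank such entries; the field names, ranking scales, and criticality formula are assumptions made for illustration, not the worksheet defined by the authors.

    # Illustrative sketch only: FMECA-style records adapted to human errors,
    # as HEECA rows might be organized (fields and scales are assumed).

    from dataclasses import dataclass

    @dataclass
    class HeecaEntry:
        task: str              # operator task or activity (from the task model)
        error_mode: str        # e.g. omission, wrong command, premature action
        effect_on_system: str  # propagated effect on system / mission performance
        severity: int          # 1 (negligible) .. 4 (catastrophic), assumed scale
        likelihood: int        # 1 (rare) .. 4 (frequent), assumed scale

        @property
        def criticality(self) -> int:
            # FMECA-like criticality index: severity weighted by likelihood.
            return self.severity * self.likelihood

    entries = [
        HeecaEntry("confirm trajectory upload", "omission",
                   "stale trajectory executed", severity=4, likelihood=2),
        HeecaEntry("acknowledge alarm", "premature action",
                   "alarm cause left unresolved", severity=2, likelihood=3),
    ]

    # Rank entries to prioritize redesign of the system, the operations, or training.
    for e in sorted(entries, key=lambda e: e.criticality, reverse=True):
        print(e.criticality, e.task, "-", e.error_mode)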