A crucial aspect of physical human-robot collaboration (HRC) is to maintain a safe common workspace for the human operator. However, the close proximity between humans and robots and the unpredictability of human behavior raise serious challenges in terms of safety. This article proposes a risk analysis methodology for collaborative robotic applications, which is compatible with well-known standards in the area and relies on formal verification techniques to automate traditional risk analysis methods. In particular, the methodology relies on temporal logic-based models to describe the different possible ways in which tasks can be carried out, and on fully automated formal verification techniques to explore the corresponding state space in order to detect and mitigate hazardous situations at early stages of system design.

Index Terms—Formal methods, human-robot collaboration (HRC), model-based risk assessment, robot safety, temporal logic.
I. INTRODUCTION

HUMAN-ROBOT collaboration (HRC) in industrial settings enhances the flexibility and adaptability of robotic systems for production. However, the close proximity and hybrid task assignment between humans and robots have a substantial impact on the safety of human operators. Most importantly, shared dynamic environments cannot be effectively supported by static safety analyses. In particular, hybrid human-robot tasks give rise to different possible execution workflows, and the assignment of tasks between humans and robots can change during operation. In realistic scenarios, the environment can also change, due to mobile resources, tool changes, and modifications of the layout and of process locations, so that the specifications of the system need to be re-evaluated. The safety of such evolving systems should not be verified as a static property, but rather according to the behavior of the system in actual situations.
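For illustration, the kind of behavior-dependent safety requirement targeted by such an approach can be stated as a temporal logic formula. The following is a minimal sketch in linear temporal logic (LTL), where the atomic propositions human_in_ws, sep_stop, and speed_reduced are hypothetical labels introduced only for this example:

\[
\mathbf{G}\bigl(\mathit{human\_in\_ws} \;\rightarrow\; (\mathit{sep\_stop} \,\lor\, \mathit{speed\_reduced})\bigr)
\]

that is, in every reachable state of the task model, whenever the operator is inside the shared workspace, the robot is either stopped or operating in a reduced-speed mode. A model checker exploring the state space of such a model can then either establish the property or return an execution trace that exposes a hazardous situation to be addressed at design time.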