Autonomous robotic systems are complex, hybrid, and often safety-critical; this makes their formal specification and verification uniquely challenging. Though commonly used, testing and simulation alone are insufficient to ensure the correctness of, or provide sufficient evidence for the certification of, autonomous robotics. Formal methods for autonomous robotics have received some attention in the literature, but no resource provides a current overview. This paper systematically surveys the state-of-the-art in formal specification and verification for autonomous robotics. Specifically, it identifies and categorises the challenges posed by, the formalisms aimed at, and the formal approaches for the specification and verification of autonomous robotics.

Introduction, Methodology and Related Work

An autonomous system is an artificially intelligent entity that makes decisions in response to input, independent of human interaction. Robotic systems are physical entities that interact with the physical world. Thus, we consider an autonomous robotic system to be a machine that uses Artificial Intelligence (AI) and has a physical presence in, and interacts with, the real world. Such systems are complex, inherently hybrid systems, combining both hardware and software, and they often require close safety, legal, and ethical consideration. Autonomous robotics are increasingly being used in commonplace scenarios, such as driverless cars [68], pilotless aircraft [176], and domestic assistants [174,60]. While for many engineered systems testing, either through real deployment or via simulation, is deemed sufficient, the unique challenges of autonomous robotics, their dependence on sophisticated software control and decision-making, and their increasing deployment in safety-critical scenarios require a stronger form of verification.
This leads us towards using formal methods, which are mathematically based techniques for the specification and verification of software systems, to ensure the correctness of, and provide sufficient evidence for the certification of, robotic systems.

We contribute an overview and analysis of the state-of-the-art in formal specification and verification of autonomous robotics. §1.1 outlines the scope, research questions, and search criteria for our survey. §1.2 describes related work concerning formal methods for robotics and differentiates it from our own. We recognise the important role that middleware architectures and non- and semi-formal techniques play in the development of reliable robotics, and we briefly summarise some of these techniques in §2. The specification and verification challenges raised by autonomous robotic systems are discussed next: §3 describes the challenges of their context (the external challenges) and §4 describes the challenges of their organisation (the internal challenges). §5 discusses the formalisms used in the literature for specification and verification of autonomous robotics. §6 characterises the approaches to formal specification and verification of autonomous robotics found in the literature.
In this paper we describe a verification system for multi-agent programs. This is the first comprehensive approach to the verification of programs developed using programming languages based on the BDI (belief-desire-intention) model of agency. In particular, we have developed a specific layer of abstraction, sitting between the underlying verification system and the agent programming language, that maps the semantics of agent programs into the relevant model-checking framework. Crucially, this abstraction layer is both flexible and extensible; not only can a variety of different agent programming languages be implemented and verified, but even heterogeneous multi-agent programs can be captured semantically. In addition to describing this layer, and the semantic mapping inherent within it, we describe how the underlying model-checker is driven and how agent properties are checked. We also present several examples showing how the system can be used. As this is the first system of its kind, it is relatively slow, so we also indicate further work that needs to be tackled to improve performance.
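The core technique behind this kind of agent verification is explicit-state exploration: every reachable configuration of the agent program, under every environment choice, is enumerated and checked against a property. As a loose, self-contained illustration only (the plan library, belief atoms, and percepts below are invented for the example, and the real system described above drives an underlying model checker rather than a hand-rolled search), a tiny BDI-style agent can be explored like this:

```python
# Hypothetical, minimal BDI-style agent: beliefs are a frozenset of atoms;
# each plan is (trigger belief -> (action name, beliefs added, beliefs removed)).
PLANS = {
    "obstacle": ("brake", {"stopped"}, {"moving"}),
    "clear":    ("go",    {"moving"},  {"stopped"}),
}

def step(beliefs, percept):
    """One agent cycle: update perception beliefs, then act on the first triggered plan."""
    beliefs = (frozenset(beliefs) - {"obstacle", "clear"}) | {percept}
    for trigger, (_action, add, rem) in PLANS.items():
        if trigger in beliefs:
            return frozenset((beliefs - rem) | add)
    return frozenset(beliefs)

def reachable(init, percepts, depth):
    """Model checking in miniature: branch on every nondeterministic
    environment percept and collect all reachable belief states."""
    states = {frozenset(init)}
    for _ in range(depth):
        states |= {step(b, p) for b in states for p in percepts}
    return states

states = reachable({"moving", "clear"}, ["obstacle", "clear"], depth=4)
# Safety property: the agent never believes it is moving while perceiving an obstacle.
assert all(not ({"moving", "obstacle"} <= s) for s in states)
print(f"{len(states)} reachable states, property holds")
```

The abstraction layer described in the abstract plays the role of `step` here: it gives the model checker an executable semantics for each agent language, so the same exhaustive exploration works across heterogeneous agent programs.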
Exploring autonomous systems and the agents that control them.
The coordination of multiple autonomous vehicles into convoys or platoons is expected on our highways in the near future. However, before such platoons can be deployed, the new autonomous behaviours of the vehicles in these platoons must be certified. An appropriate representation for vehicle platooning is as a multi-agent system in which each agent captures the "autonomous decisions" carried out by each vehicle. In order to ensure that these autonomous decision-making agents in vehicle platoons never violate safety requirements, we use formal verification. However, as the formal verification technique used to verify the agent code does not scale to the full system, and as the global verification technique does not capture the essential verification of autonomous behaviour, we use a combination of the two approaches. This mixed strategy allows us to verify safety requirements not only of a model of the system, but of the actual agent code used to program the autonomous vehicles.

Vehicle-to-vehicle (V2V) communication is used at a lower (continuous control system) level to adjust each vehicle's position in the lanes and the spacing between the vehicles. V2V is also used at higher levels, for example to communicate joining requests, leaving requests, or commands dissolving the platoon. A traditional approach is therefore to implement the software for each vehicle in terms of hybrid (and hierarchical) control systems and to analyse this using hybrid systems techniques. However, as the behaviours and requirements of these automotive platoons become more complex, there is a move towards much greater autonomy within each vehicle. Although the human in the vehicle is still responsible, the autonomous control deals with much of the complex negotiation needed to allow other vehicles to leave and join, etc. Traditional approaches involve hybrid automata [12], in which the continuous aspects are encapsulated within discrete states, while discrete behaviours are expressed as transitions between these states.
A drawback of combining discrete decision-making and continuous control within a hybrid automaton is that it is difficult to separate the two concerns: high-level decision-making and continuous control. In addition, the representation of the high-level decision-making can become unnecessarily complex.

As is increasingly common within autonomous systems, we use a hybrid autonomous systems architecture in which not only is the discrete decision-making component separated from the continuous control system, but the behaviour of the discrete part is described in much more detail. In particular, the agent paradigm is used [26]. This style of architecture not only improves the system design from an engineering perspective but also facilitates system analysis and verification. Indeed, we use this architecture for actually implementing automotive platoons, and here we aim to analyse the system by verification.

Safety certification is an inevitable concern in the development of more autonomous road vehicles, and verifying the safety and reliability ...
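The "global" side of the mixed strategy above amounts to checking a safety invariant over every reachable state of an abstract system model. As a rough sketch under invented assumptions (the join protocol, state encoding, and spacing threshold below are illustrative, not the verified platoon model from the paper), a breadth-first search can exhaustively check a joining-safety requirement:

```python
from collections import deque

MIN_GAP = 2  # abstract spacing units; illustrative threshold, not a real standard

def successors(state):
    """All transitions of a toy join protocol: (joiner_status, leader_ack, gap)."""
    status, ack, gap = state
    nxt = []
    if status == "idle":
        nxt.append(("requested", ack, gap))      # joiner sends a join request
    if status == "requested" and not ack:
        nxt.append((status, True, gap))          # platoon leader grants permission
    if status == "requested" and ack:
        nxt.append(("joining", ack, gap))        # joiner starts the manoeuvre
    if status == "joining":
        if gap < MIN_GAP:
            nxt.append((status, ack, gap + 1))   # continuous layer opens the gap
        else:
            nxt.append(("joined", ack, gap))     # spacing is safe: complete the join
    return nxt

def check(init):
    """Breadth-first search over all reachable states, asserting the safety
    requirement in each one (explicit-state model checking in miniature)."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        status, ack, gap = state
        # Safety: a vehicle is never joined without permission and safe spacing.
        assert status != "joined" or (ack and gap >= MIN_GAP)
        for s in successors(state):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return len(seen)

print(check(("idle", False, 0)), "states explored, invariant holds")
```

In the papers' approach, a check like this would run over an abstract model of the whole platoon, while the agent-level verification runs over the actual decision-making code of a single vehicle; neither alone covers both the global requirement and the real implementation.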