The coordination of multiple autonomous vehicles into convoys or platoons is expected on our highways in the near future. However, before such platoons can be deployed, the new autonomous behaviours of the vehicles in these platoons must be certified. An appropriate representation for vehicle platooning is as a multiagent system in which each agent captures the "autonomous decisions" carried out by each vehicle. To ensure that these autonomous decision-making agents never violate safety requirements, we use formal verification. However, the formal verification technique used to verify the agent code does not scale to the full system, while the global verification technique does not capture the essential verification of autonomous behaviour, so we use a combination of the two approaches. This mixed strategy allows us to verify safety requirements not only of a model of the system, but of the actual agent code used to program the autonomous vehicles.

Vehicle-to-vehicle (V2V) communication is used at a lower (continuous control system) level to adjust each vehicle's position in the lanes and the spacing between the vehicles. V2V is also used at higher levels, for example to communicate joining requests, leaving requests, or commands dissolving the platoon. A traditional approach is therefore to implement the software for each vehicle in terms of hybrid (and hierarchical) control systems and to analyse it using hybrid-systems techniques.

However, as the behaviours and requirements of these automotive platoons become more complex, there is a move towards much greater autonomy within each vehicle. Although the human in the vehicle remains responsible, the autonomous control deals with much of the complex negotiation required to allow other vehicles to leave and join the platoon. Traditional approaches involve hybrid automata [12], in which the continuous aspects are encapsulated within discrete states, while discrete behaviours are expressed as transitions between these states.
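To make the hybrid-automaton idea concrete, the following is a minimal illustrative sketch (our own, not from the paper; all gains and thresholds are assumed values): continuous spacing dynamics evolve inside each discrete mode, and guarded transitions between the modes capture the discrete behaviour.

```python
# Toy hybrid automaton for one platoon follower (hypothetical example):
# two discrete modes, "follow" and "brake"; continuous dynamics are
# integrated with a simple Euler step inside each mode, and guards on
# the inter-vehicle gap trigger the discrete transitions.

SAFE_GAP = 10.0   # desired inter-vehicle gap in metres (assumed)
DT = 0.1          # integration step in seconds (assumed)

def step(mode, gap, own_speed, lead_speed):
    """One Euler step of the continuous flow, then evaluate the guards."""
    # Continuous flow: the gap changes with the relative speed.
    gap += (lead_speed - own_speed) * DT
    if mode == "follow":
        # Proportional control toward the safe gap, damped on relative speed.
        own_speed += (0.5 * (gap - SAFE_GAP) + (lead_speed - own_speed)) * DT
        if gap < 0.5 * SAFE_GAP:      # guard: dangerously close -> brake
            mode = "brake"
    else:  # mode == "brake"
        own_speed = max(0.0, own_speed - 4.0 * DT)  # constant deceleration
        if gap > SAFE_GAP:            # guard: gap restored -> follow
            mode = "follow"
    return mode, gap, own_speed

# A follower that starts too close and too fast brakes, then resumes following.
mode, gap, v = "follow", 4.0, 25.0
modes = []
for _ in range(100):
    mode, gap, v = step(mode, gap, v, lead_speed=20.0)
    modes.append(mode)
```

Even in this tiny example, the decision logic (the guards and mode switches) is tangled with the continuous control law inside one automaton, which is precisely the separation-of-concerns problem discussed next.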
A drawback of combining discrete decision-making and continuous control within a hybrid automaton is that it is difficult to separate the two concerns (high-level decision-making and continuous control). In addition, the representation of the high-level decision-making can become unnecessarily complex.

As is increasingly common within autonomous systems, we use a hybrid autonomous systems architecture in which not only is the discrete decision-making component separated from the continuous control system, but the behaviour of the discrete part is described in much more detail. In particular, the agent paradigm is used [26]. This style of architecture not only improves the system design from an engineering perspective but also facilitates system analysis and verification. Indeed, we use this architecture for actually implementing automotive platoons, and here we aim to analyse the system by verification. Safety certification is an inevitable concern in the development of more autonomous road vehicles, and verifying the safety and reli...
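The layered architecture described above can be sketched as follows (a hypothetical illustration with names of our own choosing, not the paper's implementation): an abstraction layer turns continuous state into discrete percepts, a small rational-agent layer maps percepts to discrete actions, and a separate controller realises those actions continuously.

```python
# Hypothetical sketch of a hybrid agent architecture: the discrete
# decision-making is isolated in a small, finite agent layer, which is
# the natural target for program-level (agent) verification, while the
# continuous control is handled elsewhere.

def abstract(gap):
    """Abstraction layer: map continuous state to a discrete percept."""
    return "too_close" if gap < 10.0 else "spacing_ok"

def agent_decide(percept):
    """Agent layer: discrete percept -> discrete action.
    This finite rule table is what an agent model checker would analyse."""
    return {"too_close": "increase_gap", "spacing_ok": "hold"}[percept]

def control(action, gap, dt=0.1):
    """Continuous-control layer: realise the chosen action over one step."""
    rate = 1.0 if action == "increase_gap" else 0.0
    return gap + rate * dt

# Closed loop: the gap opens until the agent is satisfied, then holds.
gap = 8.0
for _ in range(50):
    gap = control(agent_decide(abstract(gap)), gap)
```

Because `agent_decide` sees only discrete percepts and emits only discrete actions, it can be verified in isolation from the continuous dynamics, which is the modularity the architecture is designed to provide.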
Abstract. We report on experiences in the development of hybrid autonomous systems where high-level decisions are made by a rational agent. This rational agent interacts with other sub-systems via an abstraction engine. We describe three systems we have developed using the EASS BDI agent programming language and framework which supports this architecture. As a result of these experiences we recommend changes to the theoretical operational semantics that underpins the EASS framework and present a fourth implementation using the new semantics.
The spread of autonomous systems into safety-critical areas has increased the demand for their formal verification, not only due to stronger certification requirements but also to public uncertainty over these new technologies. However, the complex nature of such systems, for example, the intricate combination of discrete and continuous aspects, ensures that whole-system verification is often infeasible. This motivates the need for novel analysis approaches that modularise the problem, allowing us to restrict our analysis to one particular aspect of the system while abstracting away from others. For instance, while verifying the real-time properties of an autonomous system we might hide the details of the internal decision-making components. In this paper we describe verification of a range of properties across distinct dimensions of a practical hybrid agent architecture. This allows us to verify the autonomous decision-making, real-time aspects, and spatial aspects of an autonomous vehicle platooning system. This modular approach also illustrates how both algorithmic and deductive verification techniques can be applied for the analysis of different system subcomponents.