Abstract. Reasoning about agents and modalities such as knowledge and belief leads to models where different relations over states co-exist, or equivalently, where information (labels, actions) is associated with state transitions. This paper discusses how to augment classical CTL symbolic model checking to support logics with actions, such as A-CTL (action-CTL), and how this can be implemented using BDDs in tools such as the SMV/NuSMV package. Considering general action-state structures, we first propose a natural extension of CTL to actions, called Action-Restricted CTL (ARCTL), and adapt classical CTL results to express model checking in terms of three functions: eax, eau and eag. On these grounds, we present two different implementations of symbolic model checking with actions. The first approach encodes action-state models and logics into pure state-based models and logics that can be checked with existing model checkers. The second approach is a native implementation of the three extended operators. We report on our prototype implementation of both approaches based on NuSMV and give an overview of how it is used to model check the temporal epistemic logic CTLK.
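To make the role of the three operators concrete, the following is a minimal explicit-state sketch in Python (plain sets of states rather than the BDD-based symbolic encoding used with NuSMV). Only the names eax, eau and eag and their standard CTL-style fixpoint characterizations are taken from the abstract; the data representation and function signatures are illustrative assumptions, not the paper's implementation.

```python
from typing import Set, Tuple

State = int
Action = str
# Labelled transition relation: a set of (source, action, target) triples.
Trans = Set[Tuple[State, Action, State]]

def eax(trans: Trans, alpha: Set[Action], targets: Set[State]) -> Set[State]:
    # States with at least one alpha-labelled transition into `targets` (E_alpha X).
    return {s for (s, a, t) in trans if a in alpha and t in targets}

def eau(trans: Trans, alpha: Set[Action],
        phi: Set[State], psi: Set[State]) -> Set[State]:
    # Least fixpoint Z = psi | (phi & eax(alpha, Z)), i.e. E_alpha [phi U psi].
    z = set(psi)
    while True:
        new = z | (phi & eax(trans, alpha, z))
        if new == z:
            return z
        z = new

def eag(trans: Trans, alpha: Set[Action], phi: Set[State]) -> Set[State]:
    # Greatest fixpoint Z = phi & eax(alpha, Z), i.e. E_alpha G phi.
    z = set(phi)
    while True:
        new = z & eax(trans, alpha, z)
        if new == z:
            return z
        z = new
```

In a symbolic implementation the same fixpoint iterations are performed over BDDs representing sets of states, with eax computed as a relational product against the action-restricted transition relation.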
Artificial Intelligence (AI) is useful: it can deliver more functionality at reduced cost. AI should be used more widely, but it won't be unless developers can trust adaptive, nondeterministic, or complex AI systems. Verification and validation (V&V) is one method used by software analysts to gain that trust. AI systems have features that make them hard to check using conventional V&V methods. Nevertheless, as we show in this chapter, there are enough alternative, readily available methods to enable the V&V of AI software.
Alternating-time Temporal Logic (ATL) is a logic for reasoning about the strategies that agents can adopt to achieve a specified collective goal. A number of extensions of this logic exist; some of them combine strategies and partial observability, others include fairness constraints, but to the best of our knowledge no work provides a unified framework for strategies, partial observability and fairness constraints. Integrating these three concepts is important when reasoning about the capabilities of agents that lack full knowledge of a system, for instance when the agents can assume that the environment behaves in a fair way. We present ATLK_irF, a logic combining strategies under partial observability in a system with fairness constraints on states. We introduce a model-checking algorithm for ATLK_irF by extending the algorithm for a full-observability variant of the logic, and we investigate its complexity. We validate our proposal with an experimental evaluation.
A number of extensions exist for Alternating-time Temporal Logic; some of them combine strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic combining strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model-checking algorithm for it.