Abstract. A safety policy defines the set of rules that governs the safe interaction of agents operating together as part of a system of systems (SoS). Agent autonomy can give rise to unpredictable, and potentially undesirable, emergent behaviour. Deriving the rules of a safety policy requires an understanding of an agent's capabilities as well as how its actions affect the environment and, consequently, the actions of other agents. Methods for multi-agent system design can aid in this understanding. Such approaches mention organisational rules; however, there is little discussion of how those rules are derived. This paper proposes modelling systems according to three viewpoints: an agent viewpoint, a causal viewpoint and a domain viewpoint. The agent viewpoint captures system capabilities and inter-relationships. The causal viewpoint describes the effect an agent's actions have on its environment as well as inter-agent influences. The domain viewpoint models assumed properties of the operating environment.
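The three viewpoints lend themselves to a simple structural illustration. The sketch below is not drawn from the paper and uses hypothetical class and field names; it only suggests one way the agent, causal and domain viewpoints could be recorded for a single agent and action.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class and field names are hypothetical and
# are not taken from the paper; they merely suggest how the three viewpoints
# could be recorded for a single agent and action.

@dataclass
class AgentViewpoint:
    agent: str
    capabilities: list[str]          # what the agent can do
    relationships: dict[str, str]    # links to other agents, e.g. "controlled_by"

@dataclass
class CausalViewpoint:
    action: str
    environment_effects: list[str]   # how the action changes the environment
    influenced_agents: list[str]     # agents whose behaviour the action affects

@dataclass
class DomainViewpoint:
    assumptions: list[str]           # assumed properties of the operating environment

# Example: one agent in a shared-airspace model
uav = AgentViewpoint("uav-1", ["climb", "descend", "hold"], {"controlled_by": "atc"})
climb = CausalViewpoint("climb", ["occupies flight level FL100"], ["uav-2"])
domain = DomainViewpoint(["all agents broadcast their position every second"])
```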
Abstract. Access control is a critical feature of many systems, including networks of services, processes within a computer, and objects within a running process. The security consequences of a particular architecture or access control policy are often difficult to determine, especially where some components are not under our control, where components are created dynamically, or where access policies are updated dynamically. The SERSCIS Access Modeller (SAM) takes a model of a system and explores how access can propagate through it. It can both prove defined safety properties and discover unwanted properties. By defining expected behaviours, recording the results as a baseline, and then introducing untrusted actors, SAM can discover a wide variety of design flaws. SAM is designed to handle dynamic systems (i.e., systems where new objects are created and access policies are modified at runtime) and systems where some objects are not trusted. It extends previous approaches such as Scollar and Authodox to provide a programmer-friendly syntax for specifying behaviour, and allows modelling of services with mutually suspicious clients. Taking the Confused Deputy example from Authodox, we show that SAM detects the attack automatically; using a web-based backup service, we show how to model RBAC systems, detecting a missing validation check; and using a proxy certificate system, we show how to extend SAM to model new access mechanisms. On discovering that a library fails to follow an RFC precisely, we re-evaluate our existing models under the new assumption and discover that the proxy certificate design is not safe with this library.
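SAM has its own modelling syntax, which is not reproduced here. As a conceptual illustration only, the sketch below treats access propagation as reachability over "may grant access to" edges and compares a baseline model against one containing an untrusted actor; the object names are invented for the example.

```python
from collections import defaultdict, deque

# Conceptual sketch only: this is not SAM's modelling syntax. It treats access
# propagation as reachability over "may grant access to" edges and compares a
# baseline model against one that includes an untrusted actor.

def propagate_access(grants, start):
    """Return every object reachable from `start` via grant edges."""
    graph = defaultdict(set)
    for giver, receiver in grants:
        graph[giver].add(receiver)
    reachable, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in reachable:
                reachable.add(nxt)
                queue.append(nxt)
    return reachable

# Baseline: the expected grants between the client, the service and its store.
baseline = [("client", "service"), ("service", "backup_store")]
# Introduce an untrusted actor that the client delegates to.
with_attacker = baseline + [("client", "attacker"), ("attacker", "service")]

# Unwanted property: the untrusted actor can now reach the protected store.
print("backup_store" in propagate_access(with_attacker, "attacker"))  # True
```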
Abstract. The SERSCIS project aims to support the use of interconnected systems of services in Critical Infrastructure (CI) applications. The problem of system interconnectedness is aptly demonstrated by 'Airport Collaborative Decision Making' (A-CDM). Failure or underperformance of any of the interlinked ICT systems may compromise the ability of airports to plan their use of resources to sustain high levels of air traffic, or to provide accurate aircraft movement forecasts to the wider European air traffic management systems. The proposed solution is to introduce further SERSCIS ICT components to manage dependability and interdependency. These use semantic models of the critical infrastructure, including its ICT services, to identify faults and potential risks and to increase human awareness of them. Semantics allows information and services to be described in a way that makes them understandable to computers. Thus, when a failure (or a threat of failure) is detected, SERSCIS components can take action to manage the consequences, including changing the interdependency relationships between services. In some cases, the components will be able to take action autonomously, e.g. to manage 'local' issues such as the allocation of CPU time to maintain service performance, or the selection of services where there are redundant sources available. In other cases the components will alert human operators so they can take action instead. The goal of this paper is to describe a Service Oriented Architecture (SOA) that can be used to address the management of ICT components and interdependencies in critical infrastructure systems.
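The 'local' autonomous actions mentioned above, such as selecting among redundant sources, can be pictured with a toy policy. The sketch below is purely illustrative: the service names, availability figures and threshold are assumptions and do not come from the SERSCIS architecture.

```python
# Illustrative sketch only: a trivial selection policy for redundant service
# sources based on monitored availability. The names and the 0.99 threshold
# are hypothetical and not part of the SERSCIS architecture.

def select_service(candidates, min_availability=0.99):
    """Pick the healthiest source that meets the threshold, or None to escalate."""
    healthy = [s for s in candidates if s["availability"] >= min_availability]
    return max(healthy, key=lambda s: s["availability"]) if healthy else None

feeds = [
    {"name": "movement-feed-a", "availability": 0.995},
    {"name": "movement-feed-b", "availability": 0.90},   # degraded source
]
chosen = select_service(feeds)
print(chosen["name"] if chosen else "alert human operator")
```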
Abstract. A 'system of systems' (SoS) comprises many systems operating collectively with a shared purpose. Individual system autonomy can give rise to unpredictable, and potentially undesirable, emergent behaviour. A policy is a set of rules that bounds the behaviours of entities. Policy can be expressed at various levels of abstraction. Building on existing goal-based decomposition approaches, this paper proposes policy as a means of achieving safety in SoS. The decomposition of policy to lower levels of abstraction must be carried out in a consistent, complete and systematic manner. The approach is agent-oriented and emphasises the recognition of contextual assumptions (such as knowledge of other agents' behaviour) in decomposing policy. To this end we present patterns of decomposition based on KAOS tactics of refinement. The application of these patterns, expressed in the Goal Structuring Notation, is illustrated using existing civil aerospace policy (the Rules of the Air Regulations).
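A rough picture of the kind of decomposition described above can be given in code. The sketch below is an assumption-laden illustration rather than the paper's notation: it models a GSN-style goal with contextual assumptions and sub-goals, using a paraphrased and simplified fragment of the Rules of the Air.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: a GSN-style goal node with contextual assumptions,
# decomposed into sub-goals. The example rules are paraphrased and simplified;
# they are not a formal rendering of the Rules of the Air Regulations.

@dataclass
class Goal:
    statement: str
    context: List[str] = field(default_factory=list)     # contextual assumptions
    subgoals: List["Goal"] = field(default_factory=list)

collision_policy = Goal(
    "Aircraft shall not collide",
    context=["Each aircraft can observe nearby traffic"],
    subgoals=[
        Goal("When two aircraft approach head-on, each alters course to the right"),
        Goal("An aircraft being overtaken keeps its course; the overtaker gives way"),
    ],
)

def leaves(goal):
    """Enumerate the lowest-level rules produced by the decomposition."""
    return [goal] if not goal.subgoals else [g for s in goal.subgoals for g in leaves(s)]

print([g.statement for g in leaves(collision_policy)])
```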