Organizations are increasingly faced with the challenge of architecting complex systems that must operate within a System of Systems context. While network science has offered useful insights into product and system architectures, we seek to extend these approaches to the evaluation of enterprise system architectures. Here, we apply graph-theoretic methods to two real-world enterprise architectures (a military communications system and a search and rescue system) to assess the relative importance of different architecture components. For both architectures, different topological measures of component significance identify different network vertices as important. From this, we identify several significant challenges a system architect needs to be cognisant of when employing graph-theoretic approaches to evaluate architectures, notably: finding suitable abstractions of heterogeneous architectural elements, and distinguishing between network-structural properties and system-functional properties. These challenges are summarized as five guiding principles for utilizing network science concepts in enterprise architecture evaluation.
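As an illustrative sketch only (not drawn from the study itself), the divergence between importance measures described above can be reproduced with standard network-analysis tooling. The graph and its component names below are hypothetical stand-ins, not the architectures analyzed in the paper:

```python
import networkx as nx

# Hypothetical stand-in for an architecture graph; component names are
# illustrative only and do not come from the studied architectures.
G = nx.Graph()
G.add_edges_from([
    ("radio", "gateway"), ("gateway", "server"),
    ("server", "database"), ("server", "console"),
    ("gateway", "console"), ("radio", "handset"),
])

# Different topological measures of component significance can each
# nominate a different vertex as "most important".
measures = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=500),
}
for name, scores in measures.items():
    top = max(scores, key=scores.get)
    print(f"{name:12s} -> top-ranked vertex: {top}")
```

Comparing the top-ranked vertex across measures makes the abstract's point concrete: "importance" is not a single topological property, and the choice of measure is itself an architectural modeling decision.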
Evaluating the complexity of an engineered system is challenging for any organization, even more so when operating in a System-of-Systems (SoS) context. Here, we analyze one particular decision support tool as an illustrative case study. This tool has been used for several years by Thales Group to evaluate system complexity across a variety of industrial engineering projects. The case study is informed by analysis of semistructured interviews with systems engineering experts within the Thales Group. This analysis reveals a number of positive and negative aspects of (i) the tool itself and (ii) the way in which the tool is embedded operationally within the wider organization. While the first set of issues may be solved by improving the tool itself, informed by further comparative analysis and the growing literature on complexity evaluation, the second “embedding challenge” is distinct and appears to have received less attention in the literature. In this paper, we address this embedding challenge by introducing a complexity evaluation framework designed according to a set of principles derived from the case study analysis: any effective complexity evaluation activity should feature collaborative effort toward an evaluation informed by a shared understanding of contextually relevant complexity factors; iterative (re-)evaluation over the course of a project; and progressive refinement of the complexity evaluation tools and processes themselves, achieved by linking project evaluations to project outcomes via a wider organizational learning cycle. The paper concludes by considering next steps, including the challenge of assuring that such a framework is implemented effectively.
Despite the wealth of available system architecture frameworks and methodologies, approaches for evaluating the robustness and resiliency of architectures for complex systems or systems of systems are few in number. As a result, system architects may turn to graph-theoretic methods to assess architecture robustness and vulnerability to cascading failure. Here, we explore the application of such methods to the analysis of two real-world system architectures (a military communications system and a search and rescue system). Both architectures are found to be relatively robust to random vertex removal but more vulnerable to targeted vertex removal. Hardening strategies for limiting the extent of cascading failure are demonstrated to have varying degrees of effectiveness. However, in taking a network perspective on architecture robustness and susceptibility to cascading failure, we find several significant challenges that impede the straightforward use of graph-theoretic methods. Most fundamentally, the conceptualization of failure dynamics across heterogeneous architectural entities requires considerable further investigation.
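A minimal sketch of the kind of random-versus-targeted removal experiment described above, assuming a networkx-style percolation analysis; the synthetic scale-free graph and the initial-degree attack strategy are illustrative stand-ins, not the paper's actual architecture models or hardening strategies:

```python
import random
import networkx as nx

def robustness_curve(G, removal_order):
    """Remove vertices one by one; after each removal, record the fraction
    of the original vertex count left in the largest connected component."""
    H, n0, curve = G.copy(), G.number_of_nodes(), []
    for v in removal_order:
        H.remove_node(v)
        if H.number_of_nodes() == 0:
            curve.append(0.0)
        else:
            giant = max(nx.connected_components(H), key=len)
            curve.append(len(giant) / n0)
    return curve

# Hypothetical stand-in for an architecture graph (scale-free topology).
G = nx.barabasi_albert_graph(100, 2, seed=42)

# Random failure: remove vertices in a random order.
random_order = random.Random(0).sample(sorted(G.nodes), G.number_of_nodes())
# Targeted attack: remove the highest-(initial-)degree vertices first.
targeted_order = sorted(G.nodes, key=G.degree, reverse=True)

k = G.number_of_nodes() // 10  # compare the curves after removing 10%
print(f"random   removal: giant component at {robustness_curve(G, random_order)[k - 1]:.2f}")
print(f"targeted removal: giant component at {robustness_curve(G, targeted_order)[k - 1]:.2f}")
```

For scale-free topologies, the targeted curve typically collapses much faster than the random one, mirroring the robust-yet-fragile pattern the abstract reports; note this sketch captures only structural disconnection, not the heterogeneous failure dynamics the abstract flags as an open problem.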
How should organizations approach the evaluation of system complexity at the early stages of system design in order to inform decision making? Since system complexity can be understood and approached in several different ways, such evaluation is challenging. In this study, we define the term “system complexity factors” to refer to a range of different aspects of system complexity that may contribute differentially to systems engineering outcomes. Views on the absolute and relative importance of these factors for early-life-cycle system evaluation are collected and analyzed using a qualitative questionnaire of International Council on Systems Engineering (INCOSE) members (n = 55). We identified two trends in the data: there is little between-participant agreement on the relative importance of system complexity factors, even among participants with a shared background and role; yet individual participants tend to be internally consistent in their ratings of that relative importance. Given this lack of alignment, we argue that successful evaluation of system complexity is better ensured by explicit determination and discussion of the (possibly implicit) perspective(s) on system complexity being taken.
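The abstract does not state how between-participant agreement was quantified. As one hypothetical way such agreement could be probed, a mean pairwise rank correlation can be computed over a participants-by-factors ratings matrix; the data below are synthetic and purely illustrative:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-in data: rows = participants, columns = complexity
# factors, values = importance ratings on a 1-5 scale (illustrative only).
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(10, 8))

# Mean pairwise Spearman correlation across participants; values near 0
# would indicate little between-participant agreement on factor rankings.
rhos = []
for i in range(len(ratings)):
    for j in range(i + 1, len(ratings)):
        rho, _ = spearmanr(ratings[i], ratings[j])
        rhos.append(rho)
print(f"mean pairwise Spearman rho: {np.nanmean(rhos):.2f}")
```

Internal consistency per participant could be probed analogously, e.g., by correlating each participant's ratings across repeated or related items; either way, such a statistic is a proxy for, not a replacement of, the qualitative analysis the study actually performed.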