The test system is a vital part of delivering a verified product to the end customer. The test system used in Kongsberg Defence and Aerospace (KDA) to test missile products today needs to change to cope with future requirements for faster project execution and for running more projects simultaneously. This article uses a Systems Thinking approach to see the bigger picture and to ensure understanding of the entire problem domain. The system consists of the following structural elements: Data Preparation System, Mission Planning System, Simulators, Data Analysis System, and Storage System. The stakeholders of the test system are testers, system owners, project managers, the company, customers, government, and suppliers. Several value-adding process changes are foreseen to make this necessary transition: automation of test execution and test analysis to avoid bottlenecks, verification at both the core product and adaption product level for modularity, combining test arena input across different systems, sub-systems, and components for re-use of data, and using Machine Learning to trigger only the necessary manual analysis. These changes will influence the system at several levels and in several ways, which a possible implementation needs to consider. It is important that aspects such as facilities, environment, security, and safety do not cause issues for the changes in question. In the current test process, the test system provides scenario data to the tester, who runs the test scenario to generate test results; the analyzer then performs test results analysis to achieve a verified product. The test structure is a limiting factor in the process of ensuring test maturity, and the analysis structure is a limiting factor in reaching the desired verification level. The test structure and analysis structure are therefore leverage points that can significantly change the test system.
The test system should have an automated test execution and test results analysis process that does not require tedious manual operations. The automated test process should further introduce Machine Learning to shift the focus from reviewing every result to managing the exceptions. KDA will increase its probability of success in future projects by applying the proposed changes to its test system.
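The abstract above proposes automated test execution where Machine Learning escalates only the exceptions to manual analysis. The following is a minimal sketch of that idea, not the company's actual system: the class, function, and metric names (`TestRun`, `miss_distance_m`, `needs_manual_analysis`) are hypothetical, and a fixed learned limit stands in for a trained classifier.

```python
# Hypothetical sketch of an automated test loop: every run is analyzed
# automatically, and a simple learned threshold (a stand-in for a trained
# ML model) decides which runs require manual analysis.

from dataclasses import dataclass

@dataclass
class TestRun:
    scenario: str
    miss_distance_m: float  # illustrative headline metric from a simulated engagement

def needs_manual_analysis(run: TestRun, learned_limit: float) -> bool:
    """Stand-in for an ML classifier: escalate only out-of-family results."""
    return run.miss_distance_m > learned_limit

def execute_campaign(runs, learned_limit=5.0):
    """Automated execution: all runs are checked, only exceptions escalate."""
    return [r.scenario for r in runs if needs_manual_analysis(r, learned_limit)]

runs = [
    TestRun("nominal_sea_skim", 1.2),
    TestRun("high_altitude_dive", 9.7),   # out of family -> manual review
    TestRun("crosswind_terminal", 2.4),
]
print(execute_campaign(runs))  # ['high_altitude_dive']
```

The design point is that the human analyst sees one flagged run instead of three, which is where the claimed removal of manual bottlenecks would come from.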
System integration testing in the defense and aerospace industry is becoming increasingly complex. The long lifetime of the system drives the need for sub-system modifications throughout the system life cycle. The manufacturer must verify that these modifications do not negatively affect the system's behavior. Hence, an extensive test regime is required to ensure reliability and robustness of the system. System behaviors that emerge from the interaction of sub-systems can be difficult to pre-define and capture in a test setup using acceptance criteria. Typical challenges with current test practice include late detection of unwanted system behavior, high cost of repetitive manual processes, and risk of release delays because of late error detection. This paper reviews the state of practice at a case company in the defense and aerospace industry. We use an industry-as-laboratory approach to explore the situation in the company. The research identifies the challenges and attempts to quantify the potential gain from improving the current practice. We find that the current dependency on manual analysis generates resource and scheduling constraints and communication issues that hinder efficient detection of emergent system behavior. We explore two approaches to automate anomaly detection of system behavior from test data. The first applies anomaly detection top-down to give an indication of overall system integrity. The second applies anomaly detection to individual system parts, making it possible to localize root causes. The work lays the foundation for further research on automated anomaly detection in system testing.
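The two approaches described above (a top-down system integrity score versus per-part detection that localizes root causes) can be sketched with simple standard-score statistics. This is an illustrative toy, not the paper's method: the channel names and the z-score detector are assumptions, chosen only to show how the two views differ on the same test data.

```python
# Toy contrast of the two anomaly-detection views from the abstract:
# (1) top-down: one integrity score per time step for the whole system;
# (2) per-part: flag individual channels (sub-systems) to localize causes.
import statistics

def zscores(values):
    """Standard scores for a series; all zeros if the series has no variance."""
    mu = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return [0.0] * len(values)
    return [(v - mu) / sd for v in values]

def system_anomaly_score(telemetry):
    """Top-down view: per time step, the max |z| across all channels."""
    per_channel = {name: zscores(series) for name, series in telemetry.items()}
    n = len(next(iter(per_channel.values())))
    return [max(abs(z[t]) for z in per_channel.values()) for t in range(n)]

def localize_anomalies(telemetry, threshold=2.0):
    """Per-part view: name the channels whose own series exceeds the threshold.
    (For short series |z| is bounded by sqrt(n-1), hence the modest default.)"""
    flagged = {}
    for name, series in telemetry.items():
        hits = [t for t, z in enumerate(zscores(series)) if abs(z) > threshold]
        if hits:
            flagged[name] = hits
    return flagged

telemetry = {
    "actuator_temp": [20.0, 20.1, 19.9, 20.0, 48.0, 20.1],  # spike at t=4
    "bus_voltage":   [28.0, 28.1, 27.9, 28.0, 28.1, 28.0],
}
print(localize_anomalies(telemetry))  # {'actuator_temp': [4]}
```

The top-down score says *when* the system looked anomalous; the per-part view says *where*, which is the root-cause localization the second approach targets.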
Modern product development often generates systems of high complexity that are prone to emergent behavior. The industry needs better practices to detect inherent emergent behavior when engineering such systems. Philosophers and researchers have debated emergence throughout history, tracing back to the Greek philosopher Aristotle (384–322 B.C.), and current literature contains both philosophical and practical examples of emergence in modern systems. In this review paper, we investigate the phenomenon of emergent behavior in engineered systems. Our aim is to describe emergence in engineered systems and, based on the literature, propose methods to detect it. Emergence is generally explained as dynamic behavior observed at the macro level that cannot be traced back to the micro level. Emergence can be known or unknown, and either positive or negative. We find that best practices for engineering complicated systems should contain a sensible suite of traditional approaches and methods, while best practices for engineering complex systems need extensions to this: a new paradigm that uses incentives to guide system behavior rather than testing it up-front.