To achieve knowledge superiority in today's operations, interoperability is key. Budget restrictions, the complexity and multiplicity of threats, and the fact that not single nations but whole regions are subject to attacks force nations to collaborate and share information as appropriate. Multiple data and information sources produce different kinds of data, real-time and non-real-time, in different formats that are disseminated to the respective command and control level for further distribution. The data is usually highly sensitive and restricted in terms of sharing. The question is how to make this data available to the right people at the right time with the right granularity. The Coalition Shared Data (CSD) concept aims to answer these questions. It has been developed within several multinational projects and has evolved over time. A continuous improvement process was established and resulted in adaptations of the architecture as well as of the technical solution and the processes it supports. Starting from the idea of reusing existing standards, sharing data through standardized interfaces and formats, and enabling metadata-based queries, the concept was merged with a more sophisticated service-based approach. This paper addresses concepts for information sharing that facilitate interoperability between heterogeneous distributed systems. It introduces the methods that were used and the challenges that had to be overcome. Furthermore, it gives a perspective on how the concept could be used in the future and what measures have to be taken to successfully bring it into operation.
As globalization affects most aspects of modern life, the challenge of quick and flexible data sharing applies to many different domains. To protect a nation's security, for example, one has to look well beyond borders and understand economic, ecological, cultural, and historical influences. Most of the time, information is produced and stored digitally, and one of the biggest challenges is to extract relevant, readable information applicable to a specific problem from a large data stock at the right time. These challenges of enabling data sharing across national, organizational, and system borders are known in other domains (e.g., ecology or medicine) as well, and domain-specific solutions such as dedicated standards have been developed for them. The question is: what can the different domains learn from each other, and do we have solutions when we need to interlink the information produced in these domains? A known problem is making civil security data available to the military domain and vice versa in collaborative operations. But what happens if an environmental crisis leads to the need to cooperate quickly with civil or military security in order to save lives? How can we achieve interoperability in such complex scenarios? This paper introduces an approach to adapting standards from one domain to another and outlines problems that have to be overcome and limitations that may apply.
To meet today's challenges in ISR (Intelligence, Surveillance and Reconnaissance) defense coalitions, Systems of Systems (SoS) architectures are needed that are flexible, function in a networked environment, and support the relevant operational doctrine and processes. To enable the distributed production of intelligence in networked operations, the Intelligence Cycle and Joint ISR (JISR) provide process descriptions that support multinational and multisystem collaboration. An interoperable SoS architecture supporting those processes needs to make use of standards for data and information management, with a special focus on dissemination. The NATO ISR Interoperability Architecture (NIIA) and supporting standards (STANAGs, standardization agreements) have been specified to meet these needs. In terms of data distribution, STANAG 4559 is the core standard of relevance here. It defines a concept, data and information models, interfaces, and services to support information dissemination according to JISR. The current specification for synchronization of JISR results, however, has deficiencies in terms of implementation complexity, flexibility, robustness, and performance. Thus, there is a need for a new approach to data dissemination in networks implementing STANAG 4559 that retains all aspects currently supported by this standard but seeks to solve the known issues. This paper therefore presents requirements for data dissemination in a JISR enterprise, derives key performance indicators (KPIs), identifies possible technical approaches, and finally defines a new solution based on the concept of Hash Tries. Here, a tree-based data structure is organized by hashes of its nodes, which allows quick identification of changes in replicated data.
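The hash-based change detection described above can be illustrated with a short sketch: a Merkle-style hash tree in which each node's hash covers its value and its children's hashes, so two replicas can be compared top-down and identical subtrees pruned immediately. All names and the simplified diff logic are illustrative assumptions, not the actual STANAG 4559 solution.

```python
import hashlib

def h(data: bytes) -> str:
    """Hex digest used to label nodes (SHA-256 as an illustrative choice)."""
    return hashlib.sha256(data).hexdigest()

class Node:
    """A tree node whose hash summarizes its own value and all descendants."""
    def __init__(self, key: str, value: bytes = b"", children=None):
        self.key = key
        self.value = value
        self.children = children or {}
        # Any change in a descendant changes that child's hash and
        # therefore bubbles up to this node and ultimately to the root.
        acc = self.key.encode() + self.value
        for k in sorted(self.children):
            acc += self.children[k].hash.encode()
        self.hash = h(acc)

def diff(a: Node, b: Node, path: str = ""):
    """Return paths of the subtrees whose contents differ between replicas."""
    if a.hash == b.hash:
        return []                          # identical subtree: prune the search
    if not a.children or a.children.keys() != b.children.keys():
        return [path or "/"]               # leaf or structural difference
    changed = []
    for k in sorted(a.children):
        changed += diff(a.children[k], b.children[k], f"{path}/{k}")
    return changed or [path or "/"]        # value changed at this node itself

# Two replicas differing in a single leaf: only that path is reported.
replica_a = Node("root", children={"x": Node("x", b"1"), "y": Node("y", b"2")})
replica_b = Node("root", children={"x": Node("x", b"1"), "y": Node("y", b"3")})
```

Comparing `replica_a` and `replica_b` touches only the nodes on the path to the changed leaf, which is the performance property the abstract attributes to the Hash Trie approach.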
In the domain of civil and military surveillance and reconnaissance, sensors and exploitation systems from different producers are used to achieve an overall picture of a critical situation. In today's multinational cooperation on security and peacekeeping, it is essential to be able to share data produced by one national asset with other systems or even other nations. Interoperability therefore has to be established between these various systems, since each of them currently deals with different metadata/data formats and interfaces. Within the multinational intelligence and surveillance project MAJIIC (Multi-Sensor Aerospace-Ground Joint ISR Interoperability Coalition), various standards have been developed to enable data sharing. They range from common data representations (e.g., imagery or radar data), metadata models, and communication protocols to Coalition Shared Data (CSD) servers. The CSD servers provide a decentralized storage facility in which the standardized information is persisted and made available to all participants through synchronization. Using standardized client interfaces, relevant data can be found and retrieved from the storage facility. Through this standardization, many of the interoperability issues have been overcome on the data representation level, resulting in the ability to share the data. However, to be able to understand the data and translate it into information, more work has to be done. The integration of the various systems into a single coherent approach needs to be continued on the process, semantic interpretation, and pragmatic levels in order to achieve full interoperability. The usage and semantics of the metadata have to be defined, as well as user roles and responsibilities. Rules have to be established to enable the correct interpretation and validation of data. The paper describes the exercise-based approach that is used in the project and reflects on the necessity of a multilevel approach to achieve interoperability.
Today's sensors and analysis systems produce huge amounts of data, and one of the main challenges is to enable users to find relevant data in time. Software systems supporting queries must provide suitable filter mechanisms. Through this filtering, the full dataset can be reduced to a manageable amount of relevant data. Filter criteria can be chained together in the form of Boolean expressions using disjunctions and conjunctions, and the resulting hierarchical structures can be grouped explicitly with parentheses. The practical use case that we present in this publication consists of a web application that accesses potentially large data sets through a defined set of metadata-based catalogue entries. This application currently supports the specification of filter criteria, which the user can concatenate only through a single global conjunction in a flat hierarchy. The underlying query language supports more complex queries using any combination of conjunctions, disjunctions, and brackets. There is a user requirement to extend the expressiveness of the client search queries so that the full scope of the query language can be leveraged meaningfully. One problem is how such complex search queries can be graphically structured and visualized in a clear and comprehensible way. Different approaches exist for the graphical visualization of program code and query languages. Some of these approaches also support graphical editing of the representations by the user; examples of such frameworks or tools are Blockly, Scratch, and Node-RED. In this publication, an analysis of the applicability of such frameworks within the existing web application is presented. For this purpose, operational constraints and exclusion criteria that must be fulfilled for use in our application are identified. This results in the selection of a framework for a future implementation.
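As a rough illustration of the expression structure discussed above, the following sketch models filter criteria combined by conjunction/disjunction with explicit grouping. The field names, the two operators, and the rendering format are hypothetical assumptions for illustration, not taken from the application or its query language.

```python
import operator
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Criterion:
    """Leaf of the filter tree: a single criterion such as sensor = 'SAR'."""
    attr: str
    op: str        # only "=" and ">=" are modeled in this sketch
    value: str

    _OPS = {"=": operator.eq, ">=": operator.ge}

    def render(self) -> str:
        return f"{self.attr} {self.op} '{self.value}'"

    def matches(self, record: dict) -> bool:
        return self._OPS[self.op](record.get(self.attr, ""), self.value)

@dataclass
class BoolExpr:
    """Inner node: a conjunction or disjunction over sub-expressions."""
    kind: str                                   # "and" | "or"
    operands: List[Union["BoolExpr", Criterion]]

    def render(self) -> str:
        joined = f" {self.kind.upper()} ".join(o.render() for o in self.operands)
        return f"({joined})"                    # parentheses make nesting explicit

    def matches(self, record: dict) -> bool:
        combine = all if self.kind == "and" else any
        return combine(o.matches(record) for o in self.operands)

# A nested query beyond a single flat global conjunction:
query = BoolExpr("and", [
    Criterion("sensor", "=", "SAR"),
    BoolExpr("or", [Criterion("date", ">=", "2020-01-01"),
                    Criterion("classified", "=", "no")]),
])
# query.render() → "(sensor = 'SAR' AND (date >= '2020-01-01' OR classified = 'no'))"
```

A tree like `query` is exactly the kind of structure that block-based editors such as Blockly present as nested, draggable groups, which is why the flat-conjunction limitation maps naturally onto the visualization question raised above.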