Multi-vendor operation of uncrewed vehicles for observation, surveillance, and surveying is already daily practice in many fields, and the integration platforms that manage multiple systems, sometimes simultaneously, have proven popular. With new European regulations for the drone industry and the growing exploitation of ground, water-surface, underwater, and aerial systems, the need for flexible, vendor-lock-in-free situation awareness and planning is growing. However, despite several recent efforts and some popular specifications that aim to become de-facto standards, the interoperability challenge for civil operations remains unsolved. To assess whether a shared data model is suitable for multi-domain use with heterogeneous vehicles, to challenge it with real applications, and to demonstrate the exchange of command and control information, OGC members started an Interoperability Experiment (IE) in 2022. The IE is based on a data model developed by Kongsberg Geospatial and partners under the Standards-based UxS Interoperability Test-bed (SUIT). It considers the other standards and specifications used in the SUIT work, as well as command and control practices from the aviation and marine communities. The presentation depicts selected use cases and scenarios and outlines the information model for localized situation awareness, mission planning, and operations. Being specific to autonomous vehicle operations, these requirements extend beyond generic geospatial representations. The authors explain relations to similar models (LSTS, MAVLink, UMAA, STANAG 4586, JAUS, C2INav) and to modern geospatial data exchange standards such as OGC SensorThings, Features, Moving Features, and GeoPose.
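As a concrete illustration of the kind of exchange the IE targets, here is a minimal sketch that publishes an uncrewed vehicle's position report as an OGC SensorThings API Observation. The server URL, Datastream id, timestamp, and coordinates are hypothetical placeholders; only the payload structure follows the SensorThings data model.

```python
import requests

# Hypothetical SensorThings API endpoint of a C2 integration platform.
STA_URL = "https://example.org/sta/v1.1"

# A position fix from an uncrewed vehicle, encoded as a SensorThings
# Observation: the result is a GeoJSON Point (lon, lat, depth) and the
# Observation is linked to an assumed, pre-registered Datastream.
observation = {
    "phenomenonTime": "2022-06-01T12:00:00Z",
    "result": {"type": "Point", "coordinates": [10.39, 63.43, -5.0]},
    "Datastream": {"@iot.id": 42},  # hypothetical Datastream of the GNSS sensor
}

resp = requests.post(f"{STA_URL}/Observations", json=observation, timeout=10)
resp.raise_for_status()
print("Created:", resp.headers.get("Location"))
```

If vendor-specific telemetry (MAVLink, STANAG 4586, and the like) were mapped into a shared model of this kind, a consuming C2 application would only need to understand one representation, regardless of the vehicle's origin.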
In distributed, heterogeneous environmental data ecosystems, the number of data sources and the volume and variety of derivatives, purposes, formats, and replicas keep growing. In theory, this can enrich the information system as a whole, revealing new data value through the combination and fusion of multiple data sources and data types, and surfacing relevant information hidden behind the variety of expressions, formats, replicas, and unknown reliability. In practice, however, data alignment is complex, and it is not always justified on capacity and business grounds. One of the most challenging but also most rewarding approaches is semantic alignment, which promises to close the information gap in data discovery and joins. To formalise it, an essential enabler is an aligned, linked, and machine-readable data model that specifies the relations between data elements and the information generated from them. The Iliad digital twins of the ocean are a case of this kind, where in-situ data and citizen science observations are mixed with multidimensional environmental data to enable data science and what-if modelling, and to be integrated into even broader ecosystems such as the European Digital Twin Ocean (EDITO) and the European Data Spaces. The Ocean Information Model (OIM), which enables traversals and profiles, is the semantic backbone of the ecosystem. Defined as a multi-level ontology, it explains data using well-known generic (Darwin Core, WoT), spatio-temporal (SOSA/SSN, OGC Geo, W3C Time, QUDT, W3C RDF Data Cube), and domain (WoRMS, AGROVOC) ontologies. Machine readability and unambiguity allow for both automated validation and a degree of automated translation.

On the other hand, efficient use of such a model requires yet another skill set in data management and development, in addition to GIS, ICT, and domain expertise. Moreover, as the semantics used in data and metadata have not yet stabilised at the implementation level, there is still considerable flexibility in how data are expressed. Following the GEO data sharing and data management principles, along with FAIR, CARE, and TRUST, the environmental data is prepared for harmonisation. Furthermore, to ease entry and to harmonise conventions, the authors introduce a multi-touchpoint data value chain API suite with an aligned approach to semantically enrich, entail, and validate data sets: from observation streams in JSON or JSON-LD based on the OIM, through storage and scientific data in NetCDF, to the exposure of the semantically aligned data via the newly endorsed and already successful OGC Environmental Data Retrieval (EDR) API. The practical approach is supported by a ready-to-use toolbox of portable components for building and validating multi-source geospatial data integrations, keeping track of the information added during mash-ups, predictions, and what-if implementations.
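To make the enrichment step tangible, the following sketch expresses a single ocean observation as JSON-LD against SOSA and QUDT, the kind of payload such an API suite would validate and entail against the OIM. The SOSA and QUDT namespaces are the published ones; all identifiers and the OIM property IRI are hypothetical.

```python
import json

# A minimal sea-surface-temperature observation in JSON-LD using SOSA terms.
# The identifiers below are hypothetical placeholders; the @context
# namespaces are the published SOSA and QUDT vocabularies.
observation = {
    "@context": {
        "sosa": "http://www.w3.org/ns/sosa/",
        "qudt": "http://qudt.org/schema/qudt/",
        "unit": "http://qudt.org/vocab/unit/",
    },
    "@id": "urn:example:obs/sst-0001",
    "@type": "sosa:Observation",
    "sosa:observedProperty": {"@id": "urn:example:oim/SeaSurfaceTemperature"},
    "sosa:madeBySensor": {"@id": "urn:example:platform/buoy-7/ctd"},
    "sosa:hasFeatureOfInterest": {"@id": "urn:example:feature/north-sea"},
    "sosa:resultTime": "2023-04-05T08:30:00Z",
    "sosa:hasResult": {
        "@type": "qudt:QuantityValue",
        "qudt:numericValue": 11.8,
        "qudt:unit": {"@id": "unit:DEG_C"},
    },
}

print(json.dumps(observation, indent=2))
```

Because the payload is valid JSON-LD, it can be expanded to RDF triples and checked against shapes derived from the OIM before it enters the data value chain.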
Abstract. The concepts of smart cities and digital twins are increasingly investigated and accepted as critical tools for the cities of the future. To make them a concrete reality, several aspects of data management have to be considered: technical issues in collecting, retrieving, exchanging, analysing, and processing data; data sovereignty; and the specification of data semantics, features, and metadata. Current projects and higher-level framework specifications explore some of these issues and propose solutions for a subset of them. However, an overall framework connecting these aspects in a unique model has not yet been defined and tested. In this paper, we propose a reference model for building an interoperable system of systems that supports the implementation of smart cities and digital twins. We start by reviewing current experiences, in particular the OGC standards and initiatives intended to provide open solutions for specific parts of the framework.
The FAIR data principles are at the core of the OGC mission and are reflected in the open geospatial standards and the open-data initiatives that use them. Although OGC is best known for technical interoperability, domain modelling and semantics play an essential role in the definition and exploitation of its standards. On the one hand, there is a growing number of specialised profiles and implementations that selectively use components of the OGC modular specification model. On the other hand, various domain ontologies already exist, enabling a better understanding of the data. As there can be multiple semantic representations, common data models support cross-ontology traversal. Defining a service in this technical-semantic space requires fixing some flexibility points, including optional and mandatory elements, additional constraints and rules, and content such as the normalised vocabularies to be used.

The OGC Definition Server is a multi-purpose application built around a triple-store database engine, integrated with ingestion, validation, and entailment tools, and exposing customised endpoints. The models are available in human-readable formats and in machine-to-machine encodings. For manual processes, it supports understanding of the technical and semantic definitions of entities and the relationships between them; programmatic solutions benefit from a precise referential system, validation, and entailment.

Currently, the OGC Definition Server hosts several types of definitions covering:

- a register of OGC bodies, assets, and their modules;
- common ontological semantic models (e.g., for agriculture);
- dictionaries of subject domains (e.g., PipelineML Codelists).

In practice, this is a step forward in bridging conceptual and logical models. Concepts can be expressed as instances of various ontological classes and interpreted within multiple contexts, with definitions translated into entities, relationships, and properties. In the future, linking data to the reference model and to external ontologies may prove even more significant: doing so can greatly improve the quality of the knowledge produced from the collected data. The ability to verify research outcomes and explainable AI are just two examples where a precise log of inferences and unambiguous semantic compatibility of the data will play a key role.
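As an illustration of the programmatic side, the sketch below runs a standard SPARQL 1.1 query over SKOS against a Definition Server style endpoint to list the concepts of one register. The endpoint URL and the concept-scheme IRI are assumptions for illustration, not the actual deployment details.

```python
import requests

# Hypothetical SPARQL endpoint of a Definition Server instance; the real
# deployment's endpoint and register IRIs may differ.
ENDPOINT = "https://example.org/definitions/sparql"

# Standard SPARQL 1.1 over SKOS: list the concepts of one (assumed) scheme
# together with their preferred labels.
QUERY = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label WHERE {
  ?concept skos:inScheme <https://example.org/def/pipelineml/codelists> ;
           skos:prefLabel ?label .
}
LIMIT 20
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=10,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["concept"]["value"], "->", row["label"]["value"])
```

A pattern like this is what makes the precise referential system usable from code: a client can resolve a definition to its triples, check its own data against them, and record exactly which inferences it applied.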