The aggregation of heterogeneous data from different institutions in cultural heritage and e-science has the potential to create rich data resources useful for a range of purposes, from research to education and public interest. In this paper, we present the X3ML framework, an information integration framework that effectively and efficiently handles the steps involved in schema mapping, uniform resource identifier (URI) definition and generation, data transformation, provision and aggregation. The framework is based on the X3ML mapping definition language, which describes both schema mappings and URI generation policies, and offers several advantages over comparable frameworks. We describe the architecture of the framework and the various available components, discuss usability aspects, and report performance metrics.
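One of the concerns the abstract names is URI definition and generation. The sketch below is purely illustrative (it is not the X3ML implementation, and the function and template format are assumptions): it shows the general idea of a declarative, template-based URI generation policy that turns a namespace, an entity type and a local identifier into a deterministic URI.

```python
def generate_uri(base: str, entity_type: str, local_id: str) -> str:
    """Build a deterministic URI from a namespace, an entity type
    and a source-local identifier (illustrative template policy)."""
    # Strip a trailing slash so joining is unambiguous.
    return f"{base.rstrip('/')}/{entity_type}/{local_id}"

# Example: the same inputs always yield the same URI, so repeated
# transformations of the same source record stay co-referent.
uri = generate_uri("http://example.org/", "person", "42")
```

The key property such a policy provides is determinism: re-running the transformation over the same source data regenerates identical URIs, which is what makes incremental aggregation across institutions feasible.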
In many applications one has to fetch and assemble pieces of information from more than one source to build a semantic warehouse offering more advanced query capabilities. In this paper the authors describe the corresponding requirements and challenges, focusing on the quality and value of the warehouse. To this end they introduce various metrics (or measures) for quantifying its connectivity, and consequently its ability to answer complex queries. The authors demonstrate the behaviour of these metrics in the context of a real, operational semantic warehouse, as well as on synthetically produced warehouses. The proposed metrics give an overview of each source's contribution to the warehouse and quantify the value of the warehouse as a whole. Consequently, these metrics can be used to advance data/endpoint profiling, and for this reason the authors use an extension of VoID to make them publishable. Such descriptions can be exploited for dataset/endpoint selection in the context of federated search. In addition, the authors show how the metrics can be used to monitor a semantic warehouse after each reconstruction, thereby reducing the cost of quality checking, and to understand its evolution over time.
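To make the idea of a connectivity metric concrete, here is a toy sketch, not the authors' actual definitions: it computes the fraction of entities that occur in triples contributed by more than one source. The function name, the metric itself and the data layout are all assumptions for illustration.

```python
from collections import defaultdict

def common_entity_ratio(triples_by_source):
    """Toy connectivity measure: the share of entities that appear
    in triples from more than one source.

    triples_by_source: {source_name: [(subject, predicate, object), ...]}
    """
    sources_per_entity = defaultdict(set)
    for source, triples in triples_by_source.items():
        for s, _, o in triples:
            sources_per_entity[s].add(source)
            sources_per_entity[o].add(source)
    if not sources_per_entity:
        return 0.0
    shared = sum(1 for srcs in sources_per_entity.values() if len(srcs) > 1)
    return shared / len(sources_per_entity)

# Two sources share the entity "fish:2", so it is the bridge that
# lets a query join facts from both.
ratio = common_entity_ratio({
    "A": [("fish:1", "eats", "fish:2")],
    "B": [("fish:2", "livesIn", "area:9")],
})
```

A warehouse with a higher value of such a measure can answer more cross-source joins, which is exactly the kind of property the paper's metrics are designed to quantify per source and for the warehouse as a whole.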
Purpose – Marine species data are scattered across a series of heterogeneous repositories and information systems. No single repository can claim to hold all marine species data. Moreover, information on marine species is made available through different formats and protocols. The purpose of this paper is to provide models and methods for integrating such information, whether for publishing, browsing or querying it. Aiming to provide a valid and reliable knowledge ground for enabling semantic interoperability of marine species data, the authors motivate a top level ontology, called MarineTLO, and discuss its use for creating MarineTLO-based warehouses. Design/methodology/approach – The authors introduce a set of motivating scenarios that highlight the need for a top level ontology. They then describe the main data sources (Fisheries Linked Open Data, ECOSCOPE, WoRMS, FishBase and DBpedia) used as a basis for constructing the MarineTLO. Findings – The paper discusses the exploitation of MarineTLO for the construction of a warehouse, and a series of uses of the MarineTLO-based warehouse is reported. Originality/value – The authors describe the design of a top level ontology for the marine domain able to satisfy the need for maintaining integrated sets of facts about marine species, thus assisting ongoing research on biodiversity. Apart from the ontology, the authors also elaborate on the mappings required for building integrated warehouses.
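The mappings the abstract mentions translate source-specific records into a shared top-level model before loading a warehouse. The sketch below is a hypothetical illustration of that step, not MarineTLO itself: the field names (`SpecName`, `valid_name`) and the target keys are assumptions, chosen only to show how per-source mapping rules converge on one schema.

```python
def to_common_model(record: dict, source: str) -> dict:
    """Map a source-specific record to a shared schema
    (illustrative field names, not the actual MarineTLO mappings)."""
    if source == "FishBase":
        return {"scientific_name": record["SpecName"], "source": source}
    if source == "WoRMS":
        return {"scientific_name": record["valid_name"], "source": source}
    raise ValueError(f"no mapping defined for source: {source}")

# Records from different sources become directly comparable.
a = to_common_model({"SpecName": "Thunnus thynnus"}, "FishBase")
b = to_common_model({"valid_name": "Thunnus thynnus"}, "WoRMS")
```

Once both records share one schema, facts about the same species can be merged and queried uniformly, which is the point of integrating under a common top-level ontology.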
Background: During recent years, X-ray microtomography (micro-CT) has seen increasing use in biological research areas such as functional morphology, taxonomy, evolutionary biology and developmental research. Micro-CT is a technology which uses X-rays to create sub-micron resolution images of external and internal features of specimens. These images can then be rendered in a three-dimensional space and used for qualitative and quantitative 3D analyses. However, micro-CT datasets are rarely made available to the public for online exploration and dissemination due to their large size and a lack of dedicated online platforms for the interactive manipulation of 3D data. Here, the development of a virtual micro-CT laboratory (Micro-CTvlab) is described, which can be used by everyone interested in digitisation methods and biological collections and aims at making the micro-CT data exploration of natural history specimens freely available over the internet. New information: The Micro-CTvlab offers the user virtual image galleries of various taxa which can be displayed and downloaded through a web application. With a few clicks, accurate, detailed three-dimensional models of species can be studied and virtually dissected without destroying the actual specimen. The data and functions of the Micro-CTvlab can be accessed either on a normal computer or through a dedicated version for mobile devices.