Abstract. This article addresses a number of limitations of state-of-the-art methods of Ontology Alignment: 1) they primarily address concepts and entities, while relations are less well studied; 2) many build on the assumption of the 'well-formedness' of ontologies, which is not necessarily true in the domain of Linked Open Data; 3) few have looked at schema heterogeneity within a single source, which is also a common issue, particularly in very large Linked Datasets created automatically from heterogeneous resources or integrated from multiple datasets. We propose a domain- and language-independent, completely unsupervised method to align equivalent relations across schemata based on their shared instances. We introduce a novel similarity measure able to cope with unbalanced populations of schema elements, an unsupervised technique to automatically decide the similarity threshold at which equivalence is asserted for a pair of relations, and an unsupervised clustering process to discover groups of equivalent relations across different schemata. Although the method is designed for aligning relations within a single dataset, it can also be adapted for cross-dataset alignment where sameAs links between datasets have been established. Using three gold standards created based on DBpedia, we obtain encouraging results from a thorough evaluation involving four baseline similarity measures and over 15 comparative models based on variants of the proposed method. The proposed method makes significant improvements over the baseline models in terms of F1 measure (mostly between 7% and 40%); it always scores the highest precision and is also among the top performers in terms of recall. We also make public the datasets used in this work, which we believe constitute the largest collection of gold standards for evaluating relation alignment in the LOD context.
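To make the idea of instance-based relation alignment concrete, the following Python sketch scores two relations by the overlap of their (subject, object) pairs, normalising by the smaller population so that an unbalanced population does not unduly depress the score. This is a minimal illustration of the general approach, not the similarity measure, threshold selection, or clustering proposed in the paper; the triples and relation names are hypothetical DBpedia-style examples.

```python
from collections import defaultdict

def relation_instances(triples):
    """Group (subject, object) pairs by relation name."""
    index = defaultdict(set)
    for s, p, o in triples:
        index[p].add((s, o))
    return index

def overlap_similarity(pairs_a, pairs_b):
    """Shared-instance overlap normalised by the smaller population,
    so a sparsely populated relation is not penalised against a dense one.
    (Illustrative measure only, not the measure proposed in the paper.)"""
    if not pairs_a or not pairs_b:
        return 0.0
    shared = len(pairs_a & pairs_b)
    return shared / min(len(pairs_a), len(pairs_b))

# Hypothetical triples expressing the same facts under two schemata.
triples = [
    ("dbr:Berlin", "dbo:country", "dbr:Germany"),
    ("dbr:Berlin", "dbp:country", "dbr:Germany"),
    ("dbr:Paris",  "dbo:country", "dbr:France"),
    ("dbr:Paris",  "dbp:country", "dbr:France"),
    ("dbr:Paris",  "dbp:leaderName", "dbr:Anne_Hidalgo"),
]

index = relation_instances(triples)
print(overlap_similarity(index["dbo:country"], index["dbp:country"]))  # 1.0
```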
This paper presents two contributions to the field of Ontology Evaluation. First, a live catalogue of pitfalls that extends previous work on modeling errors with new pitfalls resulting from an empirical analysis of over 693 ontologies. The catalogue classifies pitfalls according to the Structural, Functional and Usability-Profiling dimensions and records, for each pitfall, its importance level (critical, important or minor) and the number of ontologies in which it has been detected. Second, OOPS! (OntOlogy Pitfall Scanner!), a tool for detecting pitfalls in ontologies, targeted at newcomers and domain experts unfamiliar with description logics and ontology implementation languages. The tool operates independently of any ontology development platform and is available online. The system is evaluated both through a survey of user satisfaction and through worldwide usage statistics. In addition, it is compared with existing ontology evaluation tools in terms of the coverage of pitfalls detected.
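As a rough illustration of how a catalogue entry of this kind might be represented in code, the sketch below models a pitfall with its dimensions, importance level, and detection count. The field names and the example values are hypothetical and do not reflect OOPS!'s internal data model.

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    STRUCTURAL = "Structural"
    FUNCTIONAL = "Functional"
    USABILITY_PROFILING = "Usability-Profiling"

class Importance(Enum):
    CRITICAL = "critical"
    IMPORTANT = "important"
    MINOR = "minor"

@dataclass
class PitfallEntry:
    """One catalogue entry: what the pitfall is, how it is classified,
    how severe it is, and in how many analysed ontologies it was found."""
    identifier: str
    title: str
    dimensions: tuple[Dimension, ...]
    importance: Importance
    ontologies_affected: int

# Hypothetical example entry; the classification and count are illustrative only.
example = PitfallEntry(
    identifier="P08",
    title="Missing annotations",
    dimensions=(Dimension.USABILITY_PROFILING,),
    importance=Importance.MINOR,
    ontologies_affected=0,
)
```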
Abstract. Ontology quality can be affected by the difficulties involved in ontology modelling, which may imply the appearance of anomalies in ontologies. This situation leads to the need for validating ontologies, that is, assessing their quality and correctness. Ontology validation is a key activity in different ontology engineering scenarios such as development and selection. This paper contributes to the ontology validation activity by proposing a web-based tool, called OOPS!, independent of any ontology development environment, for detecting anomalies in ontologies. This tool will help developers to improve ontology quality by automatically detecting potential errors.

Keywords: ontology, pitfalls, ontology evaluation, ontology validation

Introduction

The emergence of ontology development methodologies during the last decades has facilitated major progress, transforming the art of building ontologies into an engineering activity. The correct application of such methodologies benefits ontology quality. However, such quality is not always guaranteed because developers must tackle a wide range of difficulties and handicaps when modelling ontologies [1,2,11,15]. These difficulties can imply the appearance of anomalies in ontologies. Therefore, ontology evaluation, which checks the technical quality of an ontology against a frame of reference [18], plays a key role in ontology engineering projects. Ontology evaluation, which can be divided into validation and verification [18], is a complex ontology engineering process, mainly for two reasons. The first is its applicability in different ontology engineering scenarios, such as development and reuse, and the second is the abundance of approaches and metrics [16]. One approach to validating ontologies is to analyze whether the ontology conforms to ontology modelling best practices; in other words, to check whether the ontology contains anomalies or pitfalls. In this regard, a set of common errors made by developers during ontology modelling is described in [15]. Moreover, in [10] a classification of errors identified during the evaluation of different features, such as consistency, completeness, and conciseness in ontology taxonomies, is provided. Finally, in [13] the authors identify an initial catalogue of common pitfalls. In addition, several tools have been developed to alleviate the dull task of evaluating ontologies. These tools support different approaches, such as (a) checking the consistency of the ontology, (b) checking compliance with the ontology language used to
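As a concrete, deliberately simplified example of what an automated pitfall check can look like, the sketch below uses rdflib to flag OWL classes that carry neither an rdfs:label nor an rdfs:comment, a check in the spirit of a "missing annotations" pitfall. It is an illustrative sketch, not how OOPS! is actually implemented.

```python
from rdflib import Graph, RDF, RDFS, OWL

def classes_missing_annotations(ontology_path):
    """Return OWL classes that have neither rdfs:label nor rdfs:comment.
    A simplified, illustrative check, not the OOPS! implementation."""
    g = Graph()
    g.parse(ontology_path)  # rdflib guesses the serialisation from the file
    flagged = []
    for cls in set(g.subjects(RDF.type, OWL.Class)):
        has_label = g.value(cls, RDFS.label) is not None
        has_comment = g.value(cls, RDFS.comment) is not None
        if not (has_label or has_comment):
            flagged.append(cls)
    return flagged

# Example usage (the file name is hypothetical):
# for cls in classes_missing_annotations("my-ontology.owl"):
#     print(f"Class without annotations: {cls}")
```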
Linked Data is the key paradigm of the Semantic Web, a new generation of the World Wide Web that promises to bring meaning (semantics) to data. A large number of both public and private organizations have published their own data, or data from other organizations, following the Linked Data principles. Since the generation and publication of Linked Data are intensive engineering processes that require high attention in order to achieve high quality, and since experience has shown that existing general guidelines are not always sufficient to be applied in every domain, this paper presents a set of guidelines for generating and publishing Linked Data in the context of energy consumption in buildings (one aspect of Building Information Models). These guidelines offer a comprehensive description of the tasks to perform, including a list of steps, tools that support each task, alternatives for performing the task, and best practices and recommendations. Furthermore, this paper presents a complete example of the generation and publication of Linked Data about energy consumption in buildings following the presented guidelines, in which the energy consumption data of council sites (e.g., buildings and lights) within the Leeds City Council jurisdiction have been generated and published as Linked Data.
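A minimal sketch of the kind of generation step such guidelines cover: lifting tabular energy consumption readings into RDF with rdflib before publishing them. The namespace, CSV column names, and property names below are hypothetical placeholders, not the vocabulary actually used for the Leeds City Council data.

```python
import csv
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical namespace; real guidelines would prescribe reusing existing vocabularies.
EX = Namespace("http://example.org/energy/")

def csv_to_rdf(csv_path):
    """Lift rows with hypothetical columns (site_id, date, kwh) into
    simple observation resources. Illustrative only."""
    g = Graph()
    g.bind("ex", EX)
    with open(csv_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            obs = EX[f"observation/{i}"]
            g.add((obs, RDF.type, EX.EnergyObservation))
            g.add((obs, EX.site, EX[f"site/{row['site_id']}"]))
            g.add((obs, EX.date, Literal(row["date"], datatype=XSD.date)))
            g.add((obs, EX.consumedEnergy, Literal(row["kwh"], datatype=XSD.decimal)))
    return g

# Example usage (file names hypothetical); publication would then expose the
# resulting RDF at dereferenceable URIs or via a SPARQL endpoint.
# csv_to_rdf("leeds_energy.csv").serialize(destination="energy.ttl", format="turtle")
```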
Existing smart city ontologies allow representing different types of city-related data. They have been developed according to different ontological commitments and hence do not share a minimum core model that would facilitate interoperability among smart city information systems. In this work, a survey has been carried out to study available smart city ontologies and to identify the domains they represent. Taking into account the findings of the survey and a set of ontological requirements for smart city data, a list of ontology design patterns is proposed. These patterns aim to be easily replicable and to provide a minimum set of core concepts that guide the development of smart city ontologies.