Abstract. In the area of semantic technologies, benchmarking and systematic evaluation are not yet as established as in other areas of computer science, e.g., information retrieval. Despite successful attempts, more effort and experience are required to reach that level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report experiences gained in this particular area of semantic technologies to potential developers of benchmarking for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view of the results of the campaigns carried out from 2005 to 2010, and discuss upcoming trends, both specific to ontology matching and generally relevant to the evaluation of semantic technologies. Finally, we argue that there is a need for further automation of benchmarking to shorten the feedback cycle for tool developers.
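To make the evaluation criteria concrete, the sketch below computes the precision, recall and F-measure that matching evaluations typically report, by comparing a produced alignment with a reference alignment. The Correspondence record and the set-based comparison are illustrative assumptions for this sketch, not the actual OAEI evaluation tooling.

```java
import java.util.Set;

// Illustrative sketch: a correspondence as an (entity1, entity2, relation) triple.
// Not the OAEI code base, just the standard set-based measures it reports.
record Correspondence(String entity1, String entity2, String relation) {}

final class AlignmentEvaluation {
    // Precision = |found ∩ reference| / |found| (1.0 by convention if nothing was found)
    static double precision(Set<Correspondence> found, Set<Correspondence> reference) {
        long correct = found.stream().filter(reference::contains).count();
        return found.isEmpty() ? 1.0 : (double) correct / found.size();
    }

    // Recall = |found ∩ reference| / |reference| (1.0 by convention if the reference is empty)
    static double recall(Set<Correspondence> found, Set<Correspondence> reference) {
        long correct = found.stream().filter(reference::contains).count();
        return reference.isEmpty() ? 1.0 : (double) correct / reference.size();
    }

    // F-measure: harmonic mean of precision and recall.
    static double fMeasure(Set<Correspondence> found, Set<Correspondence> reference) {
        double p = precision(found, reference), r = recall(found, reference);
        return (p + r) == 0 ? 0.0 : 2 * p * r / (p + r);
    }
}
```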
Alignments represent correspondences between entities of two ontologies. They are produced from the ontologies by ontology matchers. In order for matchers to exchange alignments and for applications to manipulate matchers and alignments, a minimal agreement is necessary. The Alignment API provides abstractions for the notions of network of ontologies, alignments and correspondences, as well as building blocks for manipulating them, such as matchers, evaluators, renderers and parsers. We recall the building blocks of this API and present version 4 of the Alignment API through some of its new features: ontology proxies, the expressive alignment language EDOAL and evaluation primitives.
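As an illustration of the roles the API factors out, the hypothetical interfaces below sketch correspondences, alignments, matchers, evaluators, renderers and parsers. Names and signatures are invented for this sketch and do not reproduce the actual Alignment API classes.

```java
import java.io.Writer;
import java.net.URI;

// Hypothetical, simplified interfaces illustrating the building blocks described
// above; they are not the real Alignment API declarations.
interface Correspondence {
    URI entity1();          // entity of the source ontology
    URI entity2();          // entity of the target ontology
    String relation();      // e.g. "=", "<", ">"
    double confidence();    // measure in [0,1]
}

interface Alignment extends Iterable<Correspondence> {
    URI ontology1();
    URI ontology2();
}

interface Matcher {                     // produces an alignment from two ontologies
    Alignment align(URI onto1, URI onto2);
}

interface Evaluator {                   // compares a produced alignment with a reference
    double evaluate(Alignment produced, Alignment reference);
}

interface Renderer {                    // serialises an alignment (RDF, OWL axioms, SPARQL, ...)
    void render(Alignment alignment, Writer out);
}

interface Parser {                      // reads an alignment back from a serialisation
    Alignment parse(URI location);
}
```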
Simple ontology alignments, which have been largely studied, link one entity of a source ontology to one entity of a target ontology. One limitation of these alignments, however, is their lack of expressiveness, which can be overcome by complex alignments. Although different complex matching approaches have emerged in the literature, there is a lack of complex reference alignments on which these approaches can be systematically evaluated. This paper proposes two sets of complex alignments between 10 pairs of ontologies from the well-known OAEI conference simple alignment dataset. The methodology for creating the alignment sets is described and takes into account the use of the alignments for two tasks: ontology merging and query rewriting. The ontology merging alignment set contains 313 correspondences and the query rewriting one 431. We report an evaluation of state-of-the-art complex matchers on the proposed alignment sets.

The evaluation of ontology matching approaches has been carried out over the last fifteen years in the context of the Ontology Alignment Evaluation Initiative (OAEI) campaigns. Even though this well-known campaign proposes a task-oriented benchmark (the OA4QA track [28]), it does not propose a complex alignment benchmark. This paper proposes two alignment sets to extend the OAEI conference track dataset [3,36] with complex alignments for two task purposes: ontology merging and query rewriting. The methodology for creating the alignment sets is described and takes into account the use of the alignments for the two targeted tasks. Here we extend the work presented in [33] and in [31] by enriching the alignment sets with new pairs of ontologies and by considering the task for which the alignment is needed. We also extend the work in [31] by adding an evaluation of three systems [23,24,13]. We extend the evaluation of the work in [33] by adding a new system described in [13] and by evaluating all three systems on the ten pairs of ontologies for each alignment set.

The paper is organised as follows. After giving the background on ontology matching (§2) and discussing related work (§3), we describe the methodology to create the alignments (§4), the alignments themselves and their use for the evaluation of approaches (§5). We conclude with a discussion of the proposal.
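To illustrate what distinguishes a complex correspondence from a simple one, the sketch below models a correspondence whose target side is a constructed expression (an intersection with an existential restriction on a property) rather than a single named entity. The representation is an invented stand-in for illustration, not the EDOAL language used in the alignment sets, and the URIs are hypothetical placeholders.

```java
import java.net.URI;
import java.util.List;

// Illustrative sketch: a simple correspondence links two named entities, while a
// complex correspondence may link a named entity to a constructed expression.
sealed interface EntityExpression permits NamedEntity, And, Exists {}
record NamedEntity(URI uri) implements EntityExpression {}
record And(List<EntityExpression> operands) implements EntityExpression {}
record Exists(URI property, EntityExpression filler) implements EntityExpression {}

record ComplexCorrespondence(EntityExpression source, EntityExpression target, String relation) {}

class Demo {
    public static void main(String[] args) {
        // Hypothetical example in the spirit of the conference ontologies:
        // o1:AcceptedPaper = (o2:Paper and exists o2:hasDecision . o2:Acceptance)
        var source = new NamedEntity(URI.create("http://example.org/o1#AcceptedPaper"));
        var target = new And(List.of(
                new NamedEntity(URI.create("http://example.org/o2#Paper")),
                new Exists(URI.create("http://example.org/o2#hasDecision"),
                           new NamedEntity(URI.create("http://example.org/o2#Acceptance")))));
        System.out.println(new ComplexCorrespondence(source, target, "="));
    }
}
```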