When ontological knowledge is acquired automatically, quality control is essential. Which part of the automatically acquired knowledge is appropriate for an application often depends on the context in which the knowledge base or ontology is used. In order to distinguish relevant from irrelevant or even wrong knowledge, we support the tightest possible quality assurance approach: an exhaustive manual inspection of the acquired data. By using automated reasoning, this process can be partially automated: after each expert decision, axioms that are entailed by the already confirmed statements are automatically approved, whereas axioms that would lead to an inconsistency are declined. Starting from this consideration, this paper provides theoretical foundations, heuristics, optimization strategies, and comprehensive experimental results for our approach to efficient reasoning-supported interactive ontology revision. We introduce and elaborate on the notions of revision states and revision closure as formal foundations of our method. Additionally, we propose a notion of axiom impact, which is used to determine a beneficial order of axiom evaluation and thereby further increase the effectiveness of ontology revision. The initial notion of impact is then refined to take different validity ratios, i.e., the proportion of valid statements within a dataset, into account. Since the validity ratio is generally not known a priori, we show how one can work with an estimate that is continuously improved over the course of the inspection process. Finally, we develop the notion of decision spaces, which are structures for calculating and updating the revision closure and axiom impact. We further optimize computation performance by employing partitioning techniques and provide an implementation that supports these optimizations and features a user front-end. Our evaluation shows that our ranking functions almost achieve the maximum possible automation and that each reasoning-based, automatic decision takes less than one second on average on our test dataset of over 25,000 statements.
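To make the approve/decline step described above concrete, here is a minimal sketch of the revision loop, not the authors' implementation: revise, expert_says_valid, entails, and is_consistent are hypothetical names, where the latter two stand for calls to some DL reasoner.

# Minimal sketch (an assumption, not the paper's code) of reasoning-supported revision.
def revise(candidate_axioms, expert_says_valid, entails, is_consistent):
    approved, declined = set(), set()
    undecided = set(candidate_axioms)
    while undecided:
        axiom = undecided.pop()  # the paper instead ranks axioms by estimated impact
        if expert_says_valid(axiom):   # manual expert decision
            approved.add(axiom)
        else:
            declined.add(axiom)
        # Automatic part: approve axioms entailed by the confirmed ones,
        # decline axioms that would make the confirmed set inconsistent.
        for a in list(undecided):
            if entails(approved, a):
                approved.add(a)
                undecided.discard(a)
            elif not is_consistent(approved | {a}):
                declined.add(a)
                undecided.discard(a)
    return approved, declined

In practice, entails and is_consistent would be backed by an OWL/EL reasoner, and the plain pop() would be replaced by the impact-based ordering discussed in the abstract.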
EL is a popular description logic that serves as the core formalism of several large existing knowledge bases. Uniform interpolants of knowledge bases are of high interest, e.g., in scenarios where a knowledge base is to be partially reused. However, to the best of our knowledge, no procedure has yet been proposed that computes uniform EL interpolants of general EL terminologies, and the bound on the size of uniform EL interpolants has likewise remained unknown. In this article, we propose an approach to computing a finite uniform interpolant of a general EL terminology whenever one exists. To this end, we develop a quadratic representation of EL TBoxes as regular tree grammars. Further, we show that, if a finite uniform EL interpolant exists, then there exists one that is at most triple exponential in the size of the original TBox, and that, in the worst case, no smaller interpolant exists, thereby establishing tight worst-case bounds on its size. Beyond these bounds, the notions and results established in this paper provide useful insights for designing efficient ontology reformulation algorithms, for instance in the context of module extraction.
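For intuition, a small assumed example (not taken from the article): for the EL TBox { A ⊑ ∃r.B, B ⊑ C } and the target signature {A, r, C}, forgetting the concept name B yields the uniform interpolant { A ⊑ ∃r.C }, which mentions only target symbols yet preserves all their entailed subsumptions. By contrast, forgetting B from { A ⊑ ∃r.B, B ⊑ ∃r.B } admits no finite EL interpolant, since it would have to encode an unbounded chain of ∃r-restrictions; this is why both the existence and the size of uniform interpolants are non-trivial questions.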
We discuss the problem of minimizing TBoxes expressed in the lightweight description logic EL, which forms the basis of several large ontologies such as SNOMED, the Gene Ontology, NCI, and Galen. We show that the minimization of TBoxes is intractable (NP-complete). Although this is a negative result, we also provide a heuristic technique for minimizing TBoxes. We prove the correctness of the heuristic and show that it yields optimal results for a class of ontologies that we define through an acyclicity constraint on a reference relation between equivalence classes of concepts. To establish the feasibility of our approach, we have implemented the algorithm and evaluated its effectiveness on a small suite of benchmarks.
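As a simple assumed illustration (not from the paper): in the TBox { A ⊑ B, B ⊑ C, A ⊑ C }, the axiom A ⊑ C is entailed by the other two, so the equivalent TBox { A ⊑ B, B ⊑ C } is smaller; deciding, in general, whether an equivalent TBox within a given size bound exists is the intractable problem referred to above.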
Three conflicting requirements arise in the context of knowledge base (KB) extraction: the size of the extracted KB, the size of the corresponding signature, and the syntactic similarity of the extracted KB to the original one. Minimal module extraction and uniform interpolation assign absolute priority to one of these requirements, thereby limiting the possibilities to influence the other two. We propose a novel, tractable rewriting technique for EL that does not require such an extreme prioritization, and we empirically compare it with existing approaches, with encouraging results.