We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL $$\mathcal{EL}$$. We provide a characterization of compliance of such ABoxes w.r.t. $$\mathcal{EL}$$ policies, and show how optimal compliant anonymisations of non-compliant ABoxes can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts its attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.
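As a hedged illustration (the policy and names here are our own, not taken from the paper): consider the policy $$\{\exists \mathit{seenBy}.\mathit{Oncologist}\}$$ and the quantified ABox $$\exists x.\,\{\mathit{seenBy}(\mathrm{MARY}, x),\ \mathit{Oncologist}(x)\}$$, in which the individual $$x$$ is anonymised. This ABox entails $$(\exists \mathit{seenBy}.\mathit{Oncologist})(\mathrm{MARY})$$ and is therefore not compliant with the policy. Deleting the assertion $$\mathit{Oncologist}(x)$$ yields a compliant anonymisation that still retains the weaker consequence $$(\exists \mathit{seenBy}.\top)(\mathrm{MARY})$$; an optimal compliant anonymisation is one that, like this, gives up no more consequences than necessary.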
The application of automated reasoning approaches to Description Logic (DL) ontologies may produce certain consequences that either are deemed to be wrong or should be hidden for privacy reasons. The question is then how to repair the ontology such that the unwanted consequences can no longer be deduced. An optimal repair is one where the least amount of other consequences is removed. Most of the previous approaches to ontology repair are of a syntactic nature in that they remove or weaken the axioms explicitly present in the ontology, and thus cannot achieve semantic optimality. In previous work, we have addressed the problem of computing optimal repairs of (quantified) ABoxes, where the unwanted consequences are described by concept assertions of the lightweight DL $$\mathcal{EL}$$. In the present paper, we improve on the results achieved so far in two ways. First, we allow for the presence of terminological knowledge in the form of an $$\mathcal{EL}$$ TBox. This TBox is assumed to be static in the sense that it cannot be changed in the repair process. Second, the construction of optimal repairs described in our previous work is best-case exponential. We introduce an optimized construction that is exponential only in the worst case. First experimental results indicate that this reduces the size of the computed optimal repairs considerably.
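To see why the static TBox matters, consider a toy example of our own (not taken from the paper): let $$\mathcal{T} = \{A \sqsubseteq B\}$$ and let the ABox contain $$A(a)$$ and $$B(a)$$, with $$B(a)$$ the unwanted consequence. Deleting the explicit assertion $$B(a)$$ does not repair anything, since $$A(a)$$ together with $$\mathcal{T}$$ still entails $$B(a)$$; and because $$\mathcal{T}$$ is static, the subsumption $$A \sqsubseteq B$$ cannot be removed either. A repair must therefore also remove or weaken $$A(a)$$, and an optimal one does so while preserving as many of the remaining consequences as possible.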
This work is initially motivated by a privacy scenario in which confidential information about persons or their properties, formulated in description logic (DL) ontologies, should be kept hidden. We investigate procedures that use DL formalisms to detect whether this confidential information can be disclosed in a certain situation. If this information can be deduced from the ontologies, implying that certain privacy policies are not fulfilled, then one needs methods to repair these ontologies in a minimal way such that the modified ontologies comply with the policies. However, privacy compliance itself is not enough if a possible attacker can also obtain relevant information from other sources, which together with the modified ontologies might violate the privacy policy. This article summarizes studies and results from Adrian Nuradiansyah's Ph.D. dissertation corresponding to the problems addressed above, with special emphasis on the worst-case complexities of those problems as well as the complexity of the procedures and algorithms solving them.
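A small illustration of why compliance alone is insufficient (our own hedged example, following the standard compliance/safety distinction): the ABox $$\{\mathit{seenBy}(\mathrm{MARY}, b)\}$$ is compliant with the policy $$\{\exists \mathit{seenBy}.\mathit{Oncologist}\}$$, since it does not entail the policy concept for any individual. An attacker who additionally knows the fact $$\mathit{Oncologist}(b)$$, itself compliant, can combine the two sources and derive $$(\exists \mathit{seenBy}.\mathit{Oncologist})(\mathrm{MARY})$$, violating the policy. The stronger notion of safety therefore requires that the published ontology remain compliant when combined with any compliant external information.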