We develop the first bisimulation-based method of concept learning, called BBCL, for knowledge bases in description logics (DLs). Our method is formulated for a large class of useful DLs, including well-known DLs such as ALC, SHIQ, SHOIQ, and SROIQ. Since bisimulation is the standard notion for characterizing indiscernibility of objects in DLs, our method is both natural and promising.
In description logic-based information systems, objects are described not only by attributes but also by binary relations between them. This work studies concept learning in such information systems. It extends the bisimulation-based concept learning method of Nguyen and Szałas (Rough sets and intelligent systems. Springer, Berlin, pp 517-543, 2013). We take attributes as the basic elements of the language. Each attribute may be discrete or numeric, and a Boolean attribute is treated as a concept name. This approach is more general than, and better suited to practical description logic-based information systems than, that of Nguyen and Szałas (Rough sets and intelligent systems. Springer, Berlin, pp 517-543, 2013). As further extensions, we also allow data roles and the concept constructors "functionality" and "unqualified number restrictions". We formulate and prove an important theorem on basic selectors. We also present a domain partitioning method based on information gain that is used in our implementation of the method. Apart from basic selectors and simple selectors, we introduce a new kind of selector, called extended selectors. The evaluation
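The abstract mentions a domain partitioning method based on information gain for handling numeric attributes. As a rough illustration of the underlying idea (not the paper's actual procedure; the function names and interface here are assumptions), a numeric attribute's domain can be cut at the point that maximizes information gain over the labelled objects:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of Boolean labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_split(values, labels):
    """Pick the cut point on a numeric attribute that maximizes
    information gain (hypothetical helper; the paper's partitioning
    method may differ in detail)."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_gain, best_cut = 0.0, None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no cut between equal attribute values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(pairs)
        if gain > best_gain:
            best_gain, best_cut = gain, cut
    return best_cut, best_gain
```

On a perfectly separable sample such as values `[1, 2, 3, 10, 11, 12]` with the first three objects positive, the chosen cut falls between 3 and 10 and the gain equals the full base entropy.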
Concept learning in description logics (DLs) is similar to binary classification in traditional machine learning. The difference is that in DLs objects are described not only by attributes but also by binary relationships between objects. In this paper, we develop the first bisimulation-based method of concept learning in DLs for the following setting: given a knowledge base KB in a DL, a set of objects standing for positive examples, and a set of objects standing for negative examples, learn a concept C in that DL such that the positive examples are instances of C w.r.t. KB, while the negative examples are not. We also prove the soundness of our method and investigate its C-learnability.
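The bisimulation-based setting described above can be caricatured as partition refinement: starting from one block containing the whole domain, the partition is repeatedly split by "selectors" (Boolean tests on objects) until no block mixes positive and negative examples. The sketch below is illustrative only; the selector interface and function names are assumptions, not the paper's actual algorithm:

```python
def refine(blocks, selector):
    """Split every block of the partition by a Boolean selector;
    this mirrors one bisimulation-style refinement step."""
    out = []
    for block in blocks:
        yes = {o for o in block if selector(o)}
        no = block - yes
        out.extend(b for b in (yes, no) if b)
    return out

def learn_partition(objects, positives, selectors):
    """Refine the partition of the domain until every block is
    homogeneous (all-positive or all-negative) or selectors run out.
    A simplified sketch of the bisimulation-based learning loop."""
    blocks = [set(objects)]
    for sel in selectors:
        if all(b <= positives or b.isdisjoint(positives) for b in blocks):
            break  # every block is homogeneous: done
        blocks = refine(blocks, sel)
    return blocks
```

Once the partition is homogeneous, a learned concept can in principle be read off as the union of the blocks consisting of positive examples, with each block characterized by the selectors that carved it out.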