Capturing word meaning is one of the challenges of natural language processing (NLP). Formal models of meaning such as ontologies are knowledge repositories used in a variety of applications. To be effectively used, these ontologies have to be large or, at least, adapted to specific domains. Our main goal is to contribute practically to research on ontology learning models by covering different aspects of the task. We propose probabilistic models for learning ontologies that expand existing ontologies by taking into account both corpus-extracted evidence and the structure of the generated ontologies. These models exploit structural properties of target relations, such as transitivity, during learning. We then propose two extensions of our probabilistic models: a model for learning from a generic domain that can be exploited to extract new information in a specific domain, and an incremental ontology learning system that puts human validation in the learning loop. The latter provides a graphical user interface and a human-computer interaction workflow supporting the incremental learning loop.
INTRODUCTION

Gottfried Wilhelm Leibniz was convinced that human knowledge was like a "bazaar": a place full of all sorts of goods without any order or inventory. As in a "bazaar", searching for a specific piece of knowledge is a challenge that can last forever. Nowadays, we have powerful machines to process and collect data. These machines, combined with the human need to exchange and share information, have produced an incredibly large, evolving collection of documents, partially shared through the World Wide Web. The Web is a modern, worldwide-scale knowledge "bazaar" full of all sorts of information, where searching for specific information is a titanic task.

Ontologies represent the Semantic Web's reply to the need to search for knowledge on the Web. These ontologies provide shared metadata vocabularies (Berners-Lee, Hendler, & Lassila, 2001). Data, documents, images, and information sources in general, described through these vocabularies, will thus be accessible, organized with explicit semantic references, for humans as well as for machines. Yet, to be useful, ontologies should cover a large part of human knowledge. Automatically learning these ontologies from document collections is the major challenge.

Models for automatically learning semantic networks of words from texts use both corpus-extracted evidence and existing language resources (Basili, Gliozzo, & Pennacchiotti, 2007). All these models rely on two hypotheses: the Distributional Hypothesis (DH) (Harris, 1964) and the lexico-syntactic pattern exploitation hypothesis (LSP) (Robison, 1970). While these are powerful tools for extracting relations among concepts from texts, models based on these hypotheses do not explicitly exploit structural properties of target relations when learning taxonomies or semantic networks of words. DH models intrinsically use structural properties of semantic networks of words, such as transitivity, but these models cannot be applied for l...