Properly annotated text corpora are an essential prerequisite for constructing effective and efficient natural language processing (NLP) tools, which provide operational solutions to both theoretical and applied problems in linguistics and information science. One of the central and most complex problems of corpus annotation is resolving tag ambiguities at a specific level of annotation (morphological, syntactic, semantic, etc.).
This paper addresses the ambiguity that emerges at the conceptual level, the annotation level most relevant to solving informational tasks. Conceptual annotation is a special type of semantic annotation, usually applied to domain corpora, that addresses specific informational problems such as automatic classification, content and trend analysis, machine learning, machine translation, etc.
In conceptual annotation, text corpora are annotated with tags reflecting the content of a certain domain, which gives rise to a type of ambiguity distinct from general semantic ambiguity; it has both universal and language- and domain-specific features. This paper investigates conceptual ambiguity in a case study of a Russian-language corpus on terror attacks.
The research methodology combines automated and manual steps, comprising a) statistical and qualitative corpus analysis, b) the use of pre-developed annotation resources (a terrorism domain ontology, a Russian ontolexicon, and a computer platform for conceptual annotation), c) ontological-analysis-based conceptual annotation of the corpus chosen for the case study, d) corpus-based detection and investigation of the causes of conceptual ambiguity, and e) development and experimental evaluation of disambiguation methods for selected types of conceptual ambiguity.
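The pipeline of steps c) through e) can be illustrated with a minimal sketch. All names here are hypothetical: the toy ontolexicon, ontology context cues, concept tags, and the count-based disambiguation heuristic are illustrative assumptions, not the authors' actual resources or method.

```python
# Hypothetical sketch of ontology-based conceptual annotation with
# context-driven disambiguation. Lexicon entries, concept tags, and
# the scoring heuristic are all illustrative.

# Toy "ontolexicon": surface forms mapped to candidate domain concepts.
ONTOLEXICON = {
    "теракт": ["TERROR_ATTACK"],                      # unambiguous form
    "взрыв": ["EXPLOSION_EVENT", "EXPLOSION_SOUND"],  # ambiguous form
}

# Toy ontology links: concepts expected to co-occur with each candidate.
CONTEXT_CUES = {
    "EXPLOSION_EVENT": {"TERROR_ATTACK", "CASUALTY"},
    "EXPLOSION_SOUND": {"NOISE"},
}

def annotate(tokens):
    """Step c): assign each token its candidate concept tags."""
    return [(t, ONTOLEXICON.get(t, [])) for t in tokens]

def disambiguate(annotated):
    """Step e): resolve multi-tag tokens by scoring each candidate
    against unambiguous concepts found elsewhere in the sentence."""
    context = {c for _, cands in annotated for c in cands if len(cands) == 1}
    resolved = []
    for token, cands in annotated:
        if len(cands) <= 1:
            resolved.append((token, cands[0] if cands else None))
        else:
            best = max(cands,
                       key=lambda c: len(CONTEXT_CUES.get(c, set()) & context))
            resolved.append((token, best))
    return resolved

# The terrorism-domain context favors the event reading of "взрыв".
print(disambiguate(annotate(["теракт", "взрыв"])))
# → [('теракт', 'TERror_ATTACK'.upper() and 'TERROR_ATTACK'), ...] -- see note
```

In this toy run, the unambiguous tag TERROR_ATTACK provides the context that selects EXPLOSION_EVENT over EXPLOSION_SOUND for the ambiguous form. Real conceptual disambiguation would of course draw on a full domain ontology rather than a hand-built cue table.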
The findings obtained in this study are specific to Russian-language texts in the terrorism domain, but the conceptual annotation technique and the approaches to conceptual disambiguation developed here are applicable to other domains and languages.