Causation relations are a pervasive feature of human language. Despite this, the automatic acquisition of causal information from text has proved to be a difficult task in NLP. This paper presents a method for the automatic detection and extraction of causal relations. We also present an inductive learning approach to the automatic discovery of the lexical and semantic constraints necessary to disambiguate causal relations, which are then used in question answering. We devised a classification of causal questions and tested the procedure on a QA system.
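As a purely illustrative aside (not part of the paper), the minimal Python sketch below shows the general flavor of lexico-syntactic causal-pattern matching: a hand-written "<NP1> causation-verb <NP2>" pattern yields candidate pairs, which in the approach described above would still have to pass the learned lexical and semantic constraints. The verb list and the crude noun-phrase regex are assumptions of this sketch.

```python
import re

# Hand-picked causation verbs and a crude noun-phrase approximation.
# Both are assumptions of this sketch, not the constraints learned in the paper.
CAUSAL_VERBS = r"(?:causes?|caused|leads? to|led to|results? in|triggers?|triggered)"
NP = r"(?:(?:the|a|an)\s+)?(?:\w+\s+)?\w+"   # optional determiner + up to two words

PATTERN = re.compile(
    rf"(?P<np1>{NP})\s+(?P<verb>{CAUSAL_VERBS})\s+(?P<np2>{NP})",
    re.IGNORECASE,
)

def extract_causal_candidates(sentence: str):
    """Return (candidate cause, verb, candidate effect) triples.

    These are only candidates: the same surface patterns also express
    non-causal relations, which is why a disambiguation step is needed.
    """
    return [
        (m.group("np1").strip(), m.group("verb"), m.group("np2").strip())
        for m in PATTERN.finditer(sentence)
    ]

print(extract_causal_candidates("Heavy rainfall caused severe flooding downstream."))
# -> [('Heavy rainfall', 'caused', 'severe flooding')]
```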
The NLP community has shown a renewed interest in deeper semantic analyses, among them the automatic recognition of semantic relations in text. We present the development and evaluation of a semantic analysis task: automatic recognition of relations between pairs of nominals in a sentence. The task was part of SemEval-2007, the fourth edition of the semantic evaluation event previously known as SensEval. Beyond the observations we have made, the lasting effect of this task may be a framework for comparing approaches to it. We introduce the problem of recognizing relations between nominals and describe, in particular, the process of drafting and refining the definitions of the semantic relations. We show how we created the training and test data, list and briefly describe the 15 participating systems, discuss the results, and conclude with the lessons learned in the course of this exercise.
An important problem in knowledge discovery from text is the automatic extraction of semantic relations. This paper presents a supervised, semantically intensive, domain-independent approach for the automatic detection of part-whole relations in text. First, an algorithm is described that identifies lexico-syntactic patterns that encode part-whole relations. A difficulty is that these patterns also encode other semantic relations, so a learning method is necessary to discriminate whether or not a pattern instance expresses a part-whole relation. A large set of training examples has been annotated and fed into a specialized learning system that learns classification rules. The rules are learned through an iterative semantic specialization (ISS) method applied to noun phrase constituents. Classification rules have been generated this way for different patterns, such as genitives, noun compounds, and noun phrases containing prepositional phrases, to extract part-whole relations from them. The applicability of these rules has been tested on a test corpus, yielding an overall average precision of 80.95% and recall of 75.91%. The results demonstrate the importance of word sense disambiguation for this task. They also demonstrate that different lexico-syntactic patterns encode different semantic information and should be treated separately, in the sense that different classification rules apply to different patterns.
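To make the shape of such a pipeline concrete (candidate extraction by lexico-syntactic pattern, followed by a semantic filter), here is a small Python sketch using NLTK's WordNet interface. The genitive pattern, the determiner handling, and the WordNet meronym check are simplifications assumed for this sketch; they stand in for, and are far weaker than, the ISS-learned classification rules.

```python
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet corpus

DETERMINERS = {"the", "a", "an"}

def genitive_candidates(sentence: str):
    """Harvest (part, whole) candidates from the 'X of Y' genitive pattern.

    In real text this pattern also encodes possession, location, etc.,
    which is exactly why a learned classifier is needed afterwards.
    """
    tokens = sentence.lower().rstrip(".").split()
    pairs = []
    for i, tok in enumerate(tokens):
        if tok == "of" and 0 < i < len(tokens) - 1:
            left = tokens[i - 1]
            j = i + 1
            if tokens[j] in DETERMINERS and j + 1 < len(tokens):
                j += 1
            pairs.append((left, tokens[j]))
    return pairs

def wordnet_part_of(part: str, whole: str) -> bool:
    """Toy semantic check (an assumption of this sketch, not an ISS rule):
    accept the pair if some noun sense of `whole` lists a sense of `part`
    among its part meronyms."""
    part_synsets = set(wn.synsets(part, pos=wn.NOUN))
    return any(
        part_synsets & set(w.part_meronyms())
        for w in wn.synsets(whole, pos=wn.NOUN)
    )

for part, whole in genitive_candidates("the keyboard of the laptop"):
    print(part, whole, wordnet_part_of(part, whole))
```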
This paper presents an approach for detecting semantic relations in noun phrases. A learning algorithm, called semantic scattering, is used to automatically label complex nominals, genitives and adjectival noun phrases with the corresponding semantic relation.
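The one-line summary above leaves the mechanics implicit; very roughly, semantic scattering maps the WordNet semantic classes of the two nouns to a relation label, specializing the classes when a coarser pair is ambiguous. The sketch below, with an invented class-pair table and a first-sense heuristic (both assumptions of this illustration), shows only the lookup side of that idea.

```python
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet corpus

# Invented mapping from (modifier class, head class) to a relation label.
# In the paper this mapping is learned from annotated noun phrases; the
# entries here are purely illustrative.
CLASS_PAIR_TO_RELATION = {
    ("substance.n.01", "artifact.n.01"): "STUFF-OF",
    ("person.n.01", "act.n.02"): "AGENT",
}

def noun_classes(word: str):
    """WordNet classes of the first noun sense of `word`, most specific first."""
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        return []
    syn = synsets[0]  # first-sense heuristic, an assumption of this sketch
    return [s.name() for s in [syn] + list(syn.closure(lambda x: x.hypernyms()))]

def label(modifier: str, head: str):
    """Return the relation of the most specific covered class pair, else None."""
    for m_cls in noun_classes(modifier):
        for h_cls in noun_classes(head):
            relation = CLASS_PAIR_TO_RELATION.get((m_cls, h_cls))
            if relation is not None:
                return relation
    return None

# Prints a relation label if a known class pair covers the nouns, else None.
print(label("copper", "kettle"))
```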