Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, 1990
DOI: 10.3115/981823.981857

Noun classification from predicate-argument structures

Abstract: A method of determining the similarity of nouns on the basis of a metric derived from the distribution of subject, verb and object in a large text corpus is described. The resulting quasi-semantic classification of nouns demonstrates the plausibility of the distributional hypothesis, and has potential application to a variety of tasks, including automatic indexing, resolving nominal compounds, and determining the scope of modification.
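
The method can be sketched in a few lines: score each verb-noun pair (in subject or object position) by pointwise mutual information, then compare two nouns by the association they share across verb contexts. The sketch below is a hypothetical toy reconstruction, not the paper's code; the triples, the raw-count PMI estimate, and the minimum-based combination are illustrative assumptions (the paper's exact weighting differs in detail).

```python
# Hypothetical sketch of the paper's approach (not the original code):
# score each (verb, relation, noun) triple by pointwise mutual
# information, then compare two nouns via shared verb contexts.
import math
from collections import Counter

# Toy (verb, relation, noun) triples standing in for parsed corpus output.
triples = [
    ("drink", "obj", "beer"), ("drink", "obj", "wine"),
    ("drink", "obj", "water"), ("spill", "obj", "wine"),
    ("spill", "obj", "water"), ("park", "obj", "car"),
    ("drive", "obj", "car"), ("drive", "obj", "truck"),
]

pair_counts = Counter((v, r, n) for v, r, n in triples)
verb_counts = Counter((v, r) for v, r, _ in triples)
noun_counts = Counter(n for _, _, n in triples)
total = len(triples)

def pmi(verb, rel, noun):
    """Pointwise mutual information of a verb-noun pair in one relation."""
    joint = pair_counts[(verb, rel, noun)] / total
    if joint == 0:
        return 0.0
    return math.log2(joint / ((verb_counts[(verb, rel)] / total) *
                              (noun_counts[noun] / total)))

def similarity(n1, n2):
    """Sum the smaller of the two PMI values over shared verb contexts,
    counting only contexts where both PMIs are positive (one reading of
    the minimum-based similarity; the paper's details differ)."""
    sim = 0.0
    for (v, r) in verb_counts:
        a, b = pmi(v, r, n1), pmi(v, r, n2)
        if a > 0 and b > 0:
            sim += min(a, b)
    return sim

print(similarity("beer", "wine"))   # drink-type nouns pattern together
print(similarity("beer", "car"))    # little shared verb context -> 0.0
```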

Cited by 342 publications (257 citation statements). References 6 publications.

“…-Hindle's method: The method described in [7] is used. Whereas he deals only with subjects and objects as verb-noun co-occurrences, we used all the kinds of co-occurrence mentioned in Sect.…”
Section: Comparison Experiments With Conventional Methods (mentioning)
confidence: 99%

“…To acquire synonyms automatically, contextual features of words, such as co-occurrence and modification, are extracted from large corpora and often used. Hindle [7], for example, extracted verb-noun relationships of subjects/objects and their predicates from a corpus and proposed a method to calculate the similarity of two words based on their mutual information. Although methods based on such raw co-occurrences are simple yet effective, in a naive implementation some problems arise: namely, noise and sparseness.…”
Section: Introduction (mentioning)
confidence: 99%
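
The noise and sparseness problems this excerpt raises are commonly damped with a frequency cutoff and positive PMI (clamping negative scores to zero). Below is a minimal sketch of that standard remedy; the function name and the min_count threshold are illustrative assumptions, not anything from the cited papers.

```python
import math
from collections import Counter

def ppmi_table(pairs, min_count=2):
    """Positive PMI over (context, word) pairs, discarding pairs seen
    fewer than min_count times: the cutoff damps noise, and clamping
    at zero avoids unreliable negative scores from sparse counts."""
    pair_c = Counter(pairs)
    ctx_c = Counter(c for c, _ in pairs)
    word_c = Counter(w for _, w in pairs)
    total = len(pairs)
    table = {}
    for (c, w), n in pair_c.items():
        if n < min_count:
            continue  # too rare to trust
        val = math.log2((n / total) /
                        ((ctx_c[c] / total) * (word_c[w] / total)))
        table[(c, w)] = max(0.0, val)  # positive PMI
    return table
```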

“…Linguistic models and machine learning techniques are used in automatically detecting relations, patterns, and structures in textual data at various granularities. At the word level, relationships among lexical items can be detected by using grammatical knowledge and statistical methods on large text corpora (Hindle, 1990; Hearst, 1998). Moving up to the sentence/discourse level, based on theories of rhetorical and discourse structures (Mann and Thompson, 1988; Polanyi, 1988; Grosz et al., 1995), much work has been done on automatically detecting relationships between sentences and other discourse units (Marcu and Echihabi, 2002; Burstein et al., 2003; Chan, 2004).…”
Section: Detecting Relationships In Textual Data (mentioning)
confidence: 99%

“…The similarity measure is a weighted Tanimoto measure, a version of which was also used by Grefenstette (1992, 1994). Word association is measured by mutual information, following earlier work on word similarity by Hindle (1990).…”
Section: Statistical Similarity and Clustering For Disambiguation And … (mentioning)
confidence: 99%
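
For reference, one common form of a weighted Tanimoto (weighted Jaccard) measure compares two feature-weight vectors by the ratio of their coordinate-wise minima to maxima. The sketch below assumes that form; the exact weighting used in Grefenstette's version may differ.

```python
def weighted_tanimoto(x, y):
    """Weighted Tanimoto/Jaccard similarity of two sparse weight
    vectors given as {feature: weight} dicts: the sum of the
    coordinate-wise minima over the sum of the maxima."""
    feats = set(x) | set(y)
    num = sum(min(x.get(f, 0.0), y.get(f, 0.0)) for f in feats)
    den = sum(max(x.get(f, 0.0), y.get(f, 0.0)) for f in feats)
    return num / den if den else 0.0

# e.g. mutual-information weights for two nouns over verb contexts
print(weighted_tanimoto({"drink": 1.2, "spill": 0.4},
                        {"drink": 0.9, "pour": 0.7}))  # ~0.39
```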