Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the ACL (COLING/ACL 2006)
DOI: 10.3115/1220175.1220220

Selection of effective contextual information for automatic synonym acquisition

Abstract: Various methods have been proposed for automatic synonym acquisition, since synonyms constitute some of the most fundamental lexical knowledge. Although many methods are based on contextual clues of words, little attention has been paid to which categories of contextual information are useful for the purpose. This study experimentally investigated the impact of contextual information selection by extracting three kinds of word relationships from corpora: dependency, sentence co-occurrence, and proximity. The e…

Cited by 19 publications (14 citation statements)
References 10 publications
“…We used the WordNet similarity of words as used by Hagiwara et al [9] to evaluate our model. WordNet similarity is based on the thesaurus tree structure in WordNet.…”
Section: WordNet Similarity
confidence: 99%
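The path-based idea behind WordNet similarity (similarity falls off with the distance between two nodes in the thesaurus tree) can be sketched on a toy taxonomy. This is a minimal illustration, not NLTK's WordNet implementation; the tiny animal hierarchy below is invented for the example.

```python
# Toy path-based thesaurus similarity: 1 / (1 + shortest path length
# between two nodes in a taxonomy tree). The taxonomy is invented.
parent = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal",
    "mammal": "animal", "animal": None,
}

def ancestors(node):
    """Return the chain [node, parent, ..., root]."""
    chain = []
    while node is not None:
        chain.append(node)
        node = parent[node]
    return chain

def path_similarity(a, b):
    """1 / (1 + path length through the lowest common ancestor)."""
    depth_a = {n: i for i, n in enumerate(ancestors(a))}
    for j, n in enumerate(ancestors(b)):
        if n in depth_a:
            return 1.0 / (1 + depth_a[n] + j)
    return 0.0

print(path_similarity("dog", "wolf"))  # 1/(1+2) ≈ 0.333
print(path_similarity("dog", "cat"))   # 1/(1+4) = 0.2
```

"dog" and "wolf" share the immediate parent "canine", so their path (length 2) is shorter than the dog–cat path through "mammal" (length 4), giving a higher similarity.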
“…One study [8] integrated pattern-based and distributional similarity methods to acquire lexical entailment. Another study [25] investigated the impact of contextual information selection for automatic synonym acquisition by extracting three kinds of contextual information (dependency, sentence co-occurrence, and proximity) from three different corpora. The authors found that while dependency relations and proximity perform relatively well by themselves, combining two or more kinds of contextual information gives more stable results.…”
Section: Introduction
confidence: 99%
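The distributional approach underlying this line of work (words with similar contexts are candidate synonyms) can be sketched with simple proximity features. This is a minimal illustration under invented toy data, not the cited papers' implementation: each word is represented by a vector of window co-occurrence counts and compared by cosine similarity.

```python
import math
from collections import Counter

# Toy corpus, invented for illustration.
corpus = [
    "the cat chased the mouse",
    "the kitten chased the mouse",
    "the dog barked at the cat",
]

def context_vector(target, window=2):
    """Count words appearing within `window` positions of `target`."""
    vec = Counter()
    for sent in corpus:
        words = sent.split()
        for i, w in enumerate(words):
            if w == target:
                lo, hi = max(0, i - window), min(len(words), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vec[words[j]] += 1
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine(context_vector("cat"), context_vector("kitten")))
print(cosine(context_vector("cat"), context_vector("dog")))
```

Because "cat" and "kitten" occur in near-identical contexts ("the _ chased the mouse"), their vectors score higher than the cat–dog pair, which is the signal synonym acquisition exploits.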
“…The first step is to represent each given expression with a set of co-occurring expressions in the relevant corpus. For instance, adjacent word n-grams [7,43,47], nominal arguments of verb phrases [13,40,52,54,55], modifiers and modified words [26,56], and even indirect dependencies [27] have been used. Then, the weight for each feature is adjusted.…”
Section: Automatic Paraphrase Acquisition From Monolingual Corpora
confidence: 99%
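The weight adjustment mentioned above is commonly done with pointwise mutual information (PMI), which down-weights uninformative contexts such as function words. A minimal sketch under invented toy counts (not the cited papers' exact weighting scheme):

```python
import math
from collections import Counter

# Invented (word, context) co-occurrence counts for illustration.
pair_counts = Counter({
    ("cat", "chased"): 4, ("cat", "the"): 6,
    ("dog", "barked"): 5, ("dog", "the"): 5,
})

total = sum(pair_counts.values())
word_counts, ctx_counts = Counter(), Counter()
for (w, c), n in pair_counts.items():
    word_counts[w] += n
    ctx_counts[c] += n

def pmi(w, c):
    """PMI(w, c) = log( p(w, c) / (p(w) * p(c)) )."""
    p_wc = pair_counts[(w, c)] / total
    if p_wc == 0:
        return float("-inf")
    p_w = word_counts[w] / total
    p_c = ctx_counts[c] / total
    return math.log(p_wc / (p_w * p_c))

print(pmi("cat", "chased"))  # informative context: high weight
print(pmi("cat", "the"))     # frequent everywhere: low weight
```

The informative feature "chased" receives a much larger weight for "cat" than the ubiquitous "the", even though "the" co-occurs with "cat" more often in raw counts.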