2005
DOI: 10.1007/11562214_82

Automatic Interpretation of Noun Compounds Using WordNet Similarity

Abstract: The paper introduces a method for interpreting novel noun compounds with semantic relations. The method classifies a novel compound by its word-level similarity to pretagged noun compounds, computed with WordNet::Similarity. Evaluated over 1,088 training instances and 1,081 test instances drawn from the Wall Street Journal section of the Penn Treebank, the proposed method correctly classified 53.3% of the test noun compounds. We also investigated the relative contribution of the modifier and the head noun in noun compounds of different semantic types.
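The classification scheme described in the abstract (tagging a novel compound with the relation of the most similar pretagged training compound) can be illustrated with a short nearest-neighbour sketch. The code below uses NLTK's WordNet interface in place of the Perl WordNet::Similarity package, a toy training set, Wu-Palmer similarity, and an equal weighting of modifier and head similarity; all of these are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of nearest-neighbour noun compound interpretation
# via WordNet similarity. Training data, similarity measure, and the
# equal modifier/head weighting are illustrative assumptions.
from nltk.corpus import wordnet as wn

# Hypothetical pretagged training compounds: (modifier, head) -> relation
TRAINING = {
    ("apple", "pie"): "MATERIAL",
    ("morning", "exercise"): "TIME",
    ("family", "car"): "POSSESSOR",
}

def noun_similarity(w1, w2):
    """Max Wu-Palmer similarity over the noun senses of two words."""
    synsets1 = wn.synsets(w1, pos=wn.NOUN)
    synsets2 = wn.synsets(w2, pos=wn.NOUN)
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in synsets1 for s2 in synsets2]
    return max(scores, default=0.0)

def classify(modifier, head):
    """Tag a novel compound with the relation of the most similar
    training compound, scoring modifier and head similarity equally."""
    best_rel, best_score = None, -1.0
    for (m, h), rel in TRAINING.items():
        score = noun_similarity(modifier, m) + noun_similarity(head, h)
        if score > best_score:
            best_rel, best_score = rel, score
    return best_rel

print(classify("chocolate", "cake"))  # expect MATERIAL on this toy data
```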

Cited by 62 publications (61 citation statements)
References 11 publications
“…In addition, the performances of various computational models have argued against the idea that modifiers should have a greater influence on relation selection. For example, studies by Kim and Baldwin (2005) and Rosario and Hearst (2004) both found that the head noun is more reliable than the modifier in predicting relation use.…”
Section: Modifier Primacy
confidence: 99%
“…Yet others (e.g. [22,14]) are somewhat similar to [28]. Most of the research to date has been domain independent, done on generic corpora such as the Penn Treebank, the British National Corpus, or the web.…”
Section: Related Work
confidence: 85%
“…The κ index for the overall annotation tasks was computed as follows:

Study | Agreement Index | No. of Relations
[11]  | 0.57–0.67 κ     | 43
[22]  | 0.61 κ          | 22
[23]  | 0.68 κ          | 6
[17]  | 52.31 %         | 20
[24]  | 0.58 κ          | 21
…”
Section: Results
confidence: 99%
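For reference, the κ figures quoted above are Cohen's kappa, a chance-corrected measure of inter-annotator agreement. The sketch below shows the standard computation on made-up annotator label lists; the labels and values are illustrative only, not data from any of the cited studies.

```python
# Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
# agreement and p_e is agreement expected by chance from the marginals.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Made-up labels from two hypothetical annotators
a = ["TIME", "MATERIAL", "TIME", "POSSESSOR", "TIME"]
b = ["TIME", "MATERIAL", "TIME", "TIME", "TIME"]
print(round(cohens_kappa(a, b), 2))  # 0.58 on this toy data
```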
“…Yet others (e.g. [16], [17]) are somewhat similar to [18]. Most of the research to date has been domain independent, done on generic corpora such as the Penn Treebank, the British National Corpus, or the web.…”
Section: Related Work
confidence: 88%