Published: 2020 · DOI: 10.1007/978-3-030-62419-4_30

From Syntactic Structure to Semantic Relationship: Hypernym Extraction from Definitions by Recurrent Neural Networks Using the Part of Speech Information

Abstract: The hyponym-hypernym relation is an essential element in the semantic network. Identifying the hypernym from a definition is an important task in natural language processing and semantic analysis. While a public dictionary such as WordNet works for common words, its application in domain-specific scenarios is limited. Existing tools for hypernym extraction either rely on specific semantic patterns or focus on the word representation, which all demonstrate certain limitations. Here we propose a method by combin…
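The abstract describes identifying the hypernym inside a definition with a recurrent network that uses part-of-speech information. A minimal sketch of that idea, assuming a BiLSTM token classifier over concatenated word and POS embeddings; the vocabulary sizes, dimensions, and toy input are illustrative assumptions, not the paper's actual configuration:

```python
# Sketch: score each token of a definition as "hypernym head" or not,
# using word + POS embeddings fed to a bidirectional LSTM.
import torch
import torch.nn as nn

class HypernymTagger(nn.Module):
    def __init__(self, n_words, n_pos, word_dim=100, pos_dim=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.pos_emb = nn.Embedding(n_pos, pos_dim)
        self.lstm = nn.LSTM(word_dim + pos_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 2)  # per-token logits: hypernym vs. not

    def forward(self, words, pos):
        x = torch.cat([self.word_emb(words), self.pos_emb(pos)], dim=-1)
        h, _ = self.lstm(x)           # contextualize each token in both directions
        return self.score(h)          # (batch, seq_len, 2)

# Toy usage with made-up index mappings (hypothetical, for shape-checking only).
model = HypernymTagger(n_words=5000, n_pos=50)
words = torch.randint(0, 5000, (1, 8))  # token ids of one definition
pos = torch.randint(0, 50, (1, 8))      # POS-tag ids for the same tokens
print(model(words, pos).shape)          # torch.Size([1, 8, 2])
```

Training such a tagger would use a per-token cross-entropy loss against gold hypernym annotations; the POS channel is what lets the model exploit syntactic cues (e.g., the noun after a copula) beyond the word identities alone.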

Cited by 11 publications (6 citation statements) · References 31 publications
“…The sampling method is further combined with a specifically optimized loss function, forming a new embedding method: CoarSAS2hvec. Future directions include using the method to analyze different systems characterized by HIN, such as the scientific disciplines, the individual careers of scientists, the topic evolution in online forums, and more [50][51][52][53][54]. It is also interesting to explore whether the information entropy can be universally applied to predict the performance of an algorithm.…”
Section: Discussion (mentioning)
confidence: 99%
“…Future directions include using the method to analyze different systems characterized by HIN, such as the scientific disciplines, the individual careers of scientists, the topic evolution in online forums and more [15,22,28]. It is also interesting to explore whether the information entropy can be universally applied to predict the performance of an algorithm.…”
Section: Discussion (mentioning)
confidence: 99%
“…The performance without LSTM drops by 10% or even more (Table 2), demonstrating the important role LSTM plays in capturing the temporal evolution. Indeed, LSTM and related variants have been applied intensively in tasks that require temporal feature learning [43,44,11,20,12], which itself attests to their efficiency. Note that the MSLE by CasSeqGCN noLSTM is higher than that of CasSeqGCN Mean, indicating that CasSeqGCN benefits more from the temporal learning part than from the aggregation part.…”
Section: Ablation Study: The Advanced Performance of CasSeqGCN Prompts... (mentioning)
confidence: 99%
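The excerpt above credits an LSTM with capturing the temporal evolution of a cascade represented as a sequence of snapshots. As a rough illustration of that design choice only (not CasSeqGCN's actual code), the sketch below runs an LSTM over per-snapshot feature vectors, such as aggregated GCN node states, and reads out a prediction from the final hidden state; all names and dimensions are hypothetical:

```python
# Sketch: summarize a sequence of cascade snapshot vectors with an LSTM,
# then predict a scalar target (e.g., final cascade size).
import torch
import torch.nn as nn

class SnapshotLSTM(nn.Module):
    def __init__(self, snap_dim=64, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(snap_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, snapshots):             # (batch, n_snapshots, snap_dim)
        _, (h_n, _) = self.lstm(snapshots)    # final hidden state summarizes the evolution
        return self.head(h_n[-1])             # (batch, 1)

seq = torch.randn(4, 10, 64)   # 4 cascades, 10 snapshots each (toy data)
print(SnapshotLSTM()(seq).shape)  # torch.Size([4, 1])
```

Replacing the LSTM with a mean over snapshots (the "Mean" variant mentioned in the excerpt) discards the ordering of the snapshots, which is consistent with the reported drop in performance when the temporal component is removed.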