2021
DOI: 10.1049/cit2.12006

TWE‐WSD: An effective topical word embedding based word sense disambiguation

Abstract: Word embedding has been widely used in word sense disambiguation (WSD) and many other tasks in recent years because it represents the semantics of words well. However, existing word embedding methods mostly represent each word as a single vector, without considering the homonymy and polysemy of the word; thus their performance is limited. To address this problem, an effective topical word embedding (TWE)-based WSD method, named TWE-WSD, is proposed, which integrates Latent Dirichlet Allocation (…
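The general recipe behind topical word embeddings can be sketched as follows: assign each word a topic with LDA, then learn one vector per word-topic pair by embedding topic-tagged tokens. The sketch below illustrates that idea on a toy corpus and is not the authors' TWE-WSD implementation; for simplicity it tags each word with its globally most probable topic, whereas the full method infers a topic per token occurrence.

```python
# Minimal topical-word-embedding sketch: LDA topic assignment + word#topic embedding.
# Illustrative only; not the TWE-WSD implementation from the paper.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec

# Toy corpus; in practice `docs` would be a large tokenized corpus.
docs = [["bank", "river", "water"], ["bank", "money", "loan"]]

dictionary = Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bow, id2word=dictionary, num_topics=2, passes=10, random_state=0)

def tag(doc):
    """Rewrite each token as 'word#k', where k is the word's most probable LDA topic."""
    out = []
    for w in doc:
        topics = lda.get_term_topics(dictionary.token2id[w], minimum_probability=0.0)
        k = max(topics, key=lambda t: t[1])[0] if topics else 0
        out.append(f"{w}#{k}")
    return out

# Word2Vec over topic-tagged tokens yields one vector per (word, topic) pair.
twe = Word2Vec([tag(d) for d in docs], vector_size=50, min_count=1, window=2)
```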

Cited by 14 publications (4 citation statements). References 33 publications.

Citation statements:
“…The architecture of VGG16 is shown in Figure 2. [36] by integrating VGG16 and Latent Dirichlet Allocation (LDA) [37].…”
Section: VGG16 (mentioning)
Confidence: 99%
“…The classifier of VGG16 consists of two fully connected layers with 4096 nodes and one fully connected layer with 1000 nodes, which are finally classified by the softmax function. The authors choose VGG16 as the basic model to research flower classification because it has achieved successful results in the ImageNet challenge and has been widely used in flower classification and many other fields, e.g., A-LDCNN proposed in [36] by integrating VGG16 and Latent Dirichlet Allocation (LDA) [37].…”
(mentioning)
Confidence: 99%
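For reference, the classifier head described in this statement can be written out directly. The input size (512 × 7 × 7 feature maps) and the dropout placement below follow the standard VGG16 layout (as in torchvision), not anything specific to the cited paper:

```python
import torch.nn as nn

# VGG16 classifier head: two 4096-unit fully connected layers, then a
# 1000-unit output layer (one per ImageNet class). Standard layout sketch.
vgg16_classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(),
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(),
    nn.Linear(4096, 1000),  # 1000 ImageNet classes
)
softmax = nn.Softmax(dim=1)  # applied to the final logits for classification
```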
“…Currently, the most popular method to solve this task is word embedding [5][6][7], which yields low-dimensional word vectors from corpora to calculate word similarity. It has also become a relevant topic in recent years and plays an important role in NLP downstream tasks, such as word sense disambiguation [8,9], machine translation [10,11], text summarization [12,13], and context identification systems [14].…”
Section: Introduction (mentioning)
Confidence: 99%
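As a small illustration of the similarity computation this snippet refers to, a toy embedding model can score word pairs by cosine similarity (the corpus and the resulting scores here are purely illustrative):

```python
from gensim.models import Word2Vec

# Toy corpus; real similarity scores require training on a large corpus.
toy = [["the", "bank", "approved", "the", "loan"],
       ["the", "river", "bank", "was", "muddy"]]
model = Word2Vec(toy, vector_size=25, min_count=1, window=2, seed=0)
print(model.wv.similarity("bank", "loan"))  # cosine similarity of the two word vectors
```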
“…An adaptive cross-contextual word embedding process is then defined to learn local word embeddings for each polysemous word in different contexts. Jia et al. [98] proposed an effective topical word embedding (TWE)-based WSD method to generate topical word vectors for each word under each topic. The authors used LDA to extract high-quality topic models and then generated contextual vectors for each ambiguous word to exploit its context.…”
(mentioning)
Confidence: 99%
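The disambiguation step described here — matching a contextual vector against each of an ambiguous word's topical vectors — can be sketched as below. The function names and the assumption that context words and topic-tagged tokens share one embedding space are illustrative, not taken from the paper:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(word, context_words, vectors, num_topics):
    """Pick the topic k whose 'word#k' vector is most similar to the averaged context.

    `vectors` is assumed (for this sketch) to map both plain context words and
    topic-tagged tokens 'word#k' into the same embedding space.
    """
    context = np.mean([vectors[w] for w in context_words if w in vectors], axis=0)
    candidates = [k for k in range(num_topics) if f"{word}#{k}" in vectors]
    return max(candidates, key=lambda k: cosine(vectors[f"{word}#{k}"], context))
```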