Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.2

Extracting Topics with Simultaneous Word Co-occurrence and Semantic Correlation Graphs: Neural Topic Modeling for Short Texts

Abstract: Short texts have become an increasingly popular form of text data, e.g., Twitter posts, news titles, and product reviews. Extracting semantic topics from short texts plays a significant role in a wide spectrum of NLP applications, and neural topic modeling is now a major tool for doing so. Motivated by learning more coherent and semantic topics, in this paper we develop a novel neural topic model named Dual Word Graph Topic Model (DWGTM), which extracts topics from simultaneous word co-occurrence and semantic correlation graphs. …
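
For intuition, the following is a minimal sketch of the two word graphs the abstract describes: a global co-occurrence graph counted over short texts, and a semantic correlation graph built from word-embedding cosine similarity. This is an illustrative reading, not the authors' implementation; the toy corpus, random vectors, and threshold are all hypothetical.

```python
# Illustrative sketch only: two word graphs of the kind DWGTM is described
# as using. All names and parameters here are hypothetical.
from collections import Counter
from itertools import combinations

import numpy as np


def word_cooccurrence_graph(docs):
    """Global co-occurrence graph: edge weight = number of short texts
    in which a pair of words appears together."""
    edges = Counter()
    for doc in docs:
        for u, v in combinations(sorted(set(doc)), 2):
            edges[(u, v)] += 1
    return edges


def semantic_correlation_graph(vocab, vectors, threshold=0.3):
    """Semantic graph: connect word pairs whose embedding cosine
    similarity exceeds a threshold."""
    X = np.stack([vectors[w] for w in vocab])
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    return {(vocab[i], vocab[j]): float(sim[i, j])
            for i in range(len(vocab))
            for j in range(i + 1, len(vocab))
            if sim[i, j] > threshold}


docs = [["apple", "fruit", "pie"], ["apple", "phone"], ["fruit", "juice"]]
vocab = ["apple", "fruit", "pie", "phone", "juice"]
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=8) for w in vocab}
print(word_cooccurrence_graph(docs))
print(semantic_correlation_graph(vocab, vectors))
```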

Cited by 10 publications (2 citation statements) | References 28 publications

“…They also propose a negative sampling decoder, in addition to the negative log-likelihood, to avoid repetitive topics. To address the data sparsity issue, Wang et al. (2021b) use word co-occurrence and semantic correlation graphs to enrich the learning signals of short texts. Zhao et al. (2021c) propose to incorporate entity vector representations into an NTM for short texts.…”
Section: Short Text NTMs
confidence: 99%
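
As a rough illustration of the negative-sampling decoder idea mentioned in the statement above (penalizing words a document should not reconstruct, alongside the usual reconstruction likelihood), here is one plausible sketch in PyTorch. It is an assumption about the general mechanism, not the cited paper's actual objective; `lam` and the sampling of `neg_bow` are hypothetical.

```python
# Hedged sketch of a negative-sampling decoder loss for a neural topic
# model: the usual reconstruction NLL plus a penalty on sampled negative
# words, which pushes topics apart. Not the cited paper's exact loss.
import torch


def decoder_loss(theta, beta, bow, neg_bow, lam=1.0):
    """theta: (B, K) topic proportions; beta: (K, V) topic-word dists;
    bow: (B, V) observed word counts; neg_bow: (B, V) counts of sampled
    negative words the document should NOT be able to reconstruct."""
    log_recon = torch.log(theta @ beta + 1e-10)  # (B, V) log p(word | doc)
    nll = -(bow * log_recon).sum(dim=1)          # standard reconstruction term
    neg = (neg_bow * log_recon).sum(dim=1)       # discourage negative words
    return (nll + lam * neg).mean()


B, K, V = 8, 4, 100
theta = torch.softmax(torch.randn(B, K), dim=1)
beta = torch.softmax(torch.randn(K, V), dim=1)
bow = torch.randint(0, 3, (B, V)).float()
neg_bow = torch.randint(0, 2, (B, V)).float()
print(decoder_loss(theta, beta, bow, neg_bow))
```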
“…Xie et al. (2021) suggested a Graph Topic Neural Network (GTNN) model that extracts the semantics of latent topics for intelligible document representation learning, considering word-to-word, document-to-word, and document-to-document relationships in the graph. To extract topics from concurrent word co-occurrence and semantic similarity graphs, Wang et al. (2021b) proposed the Dual Word Graph Topic Model (DWGTM), where the global word co-occurrence graph is used to train DWGTM to learn word features.…”
Section: 3.4.3
confidence: 99%
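
To make the graph structure in this statement concrete, here is a minimal sketch of a heterogeneous adjacency matrix with word-to-word, document-to-word, and document-to-document edges of the kind the GTNN description mentions. The edge weights (within-document co-occurrence, term frequency, shared-word overlap) are simple illustrative choices, not GTNN's actual construction.

```python
# Illustrative sketch: one adjacency matrix over document and word nodes,
# combining the three relation types named in the citing text. Weighting
# and normalization choices here are assumptions.
from itertools import combinations

import numpy as np


def build_graph(docs, vocab):
    """Nodes 0..n_d-1 are documents; nodes n_d..n_d+n_w-1 are words."""
    n_d = len(docs)
    widx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((n_d + len(vocab), n_d + len(vocab)))
    for d, doc in enumerate(docs):
        # document-word edges: term frequency
        for w in doc:
            A[d, n_d + widx[w]] += 1
        # word-word edges: co-occurrence within the same short text
        for u, v in combinations(set(doc), 2):
            A[n_d + widx[u], n_d + widx[v]] += 1
    # document-document edges: shared-vocabulary overlap (one simple choice)
    for d1, d2 in combinations(range(n_d), 2):
        A[d1, d2] = len(set(docs[d1]) & set(docs[d2]))
    return np.maximum(A, A.T)  # symmetrize


docs = [["apple", "fruit"], ["apple", "phone"], ["fruit", "juice"]]
vocab = ["apple", "fruit", "phone", "juice"]
print(build_graph(docs, vocab))
```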