Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1351
Topic Memory Networks for Short Text Classification

Abstract: Many classification models work poorly on short texts due to data sparsity. To address this issue, we propose topic memory networks for short text classification with a novel topic memory mechanism to encode latent topic representations indicative of class labels. Different from most prior work that focuses on extending features with external knowledge or pre-trained topics, our model jointly explores topic inference and text classification with memory networks in an end-to-end manner. Experimental results on …
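The abstract describes an architecture in which latent topics serve as an addressable memory that the classifier attends over. Below is a minimal PyTorch sketch of that idea; the class and parameter names (TopicMemoryClassifier, emb_dim, and so on) are hypothetical, and where the paper derives the topic memory from a jointly trained neural topic model, this sketch uses free parameters for brevity.

```python
# Minimal sketch of a topic-memory classifier; not the authors' exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicMemoryClassifier(nn.Module):
    """Attend over K topic vectors (the 'topic memory') and combine the
    attended topic summary with a simple averaged-embedding text encoding."""
    def __init__(self, vocab_size, emb_dim, num_topics, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # One learnable memory slot per latent topic (free parameters here;
        # in the paper these are tied to a neural topic model's output).
        self.topic_memory = nn.Parameter(torch.randn(num_topics, emb_dim))
        self.classifier = nn.Linear(2 * emb_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                # (B, L, D)
        text_vec = emb.mean(dim=1)                 # (B, D) averaged embedding
        scores = text_vec @ self.topic_memory.t()  # (B, K) memory addressing
        weights = F.softmax(scores, dim=-1)        # attention over topics
        topic_vec = weights @ self.topic_memory    # (B, D) attended summary
        return self.classifier(torch.cat([text_vec, topic_vec], dim=-1))

model = TopicMemoryClassifier(vocab_size=5000, emb_dim=64,
                              num_topics=50, num_classes=4)
logits = model(torch.randint(1, 5000, (8, 20)))    # batch of 8 short texts
print(logits.shape)                                # torch.Size([8, 4])
```

The end-to-end training the abstract refers to would add the topic model's reconstruction objective to the classification loss, so that both tasks shape the shared topic memory.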

Cited by 124 publications (87 citation statements: 2 supporting, 85 mentioning, 0 contrasting). References 35 publications.
“…To alleviate this issue, Bahdanau et al. [28] proposed the attention mechanism to focus on relevant parts of the source sequence during decoding. We use the attention mechanism in our work because previous studies [35]–[37] prove that attention-based models can better capture the key information (e.g., topical or emotional tokens) in the source sequence. Fig.…”
Section: Attention Mechanism (mentioning)
confidence: 99%
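The quoted passage refers to Bahdanau-style additive attention. The following is a minimal sketch of that scoring scheme, with hypothetical dimension names (enc_dim, dec_dim, attn_dim) and without the surrounding encoder-decoder:

```python
# Minimal sketch of additive (Bahdanau) attention; decoder loop omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (B, dec_dim); enc_outputs: (B, L, enc_dim)
        energy = torch.tanh(self.W_enc(enc_outputs)
                            + self.W_dec(dec_state).unsqueeze(1))   # (B, L, A)
        weights = F.softmax(self.v(energy).squeeze(-1), dim=-1)     # (B, L)
        context = (weights.unsqueeze(-1) * enc_outputs).sum(dim=1)  # (B, enc_dim)
        return context, weights

attn = AdditiveAttention(enc_dim=32, dec_dim=32, attn_dim=16)
context, weights = attn(torch.randn(4, 32), torch.randn(4, 7, 32))
print(context.shape, weights.shape)  # torch.Size([4, 32]) torch.Size([4, 7])
```

The softmax weights are what let the model "focus on relevant parts of the source sequence": source tokens with high weight dominate the context vector passed to the decoder.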
“…We implement our model based on the PyTorch framework in Paszke et al. (2017). For NTM, we implement it following the design in Zeng et al. (2018) and set the topic number K to 50. The KG model is set up mostly based on Meng et al. (2017).…”
Section: Jointly Learning Topics and Keyphrases (mentioning)
confidence: 99%
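The cited setup uses a neural topic model (NTM) with K = 50 topics. Here is a minimal VAE-style NTM sketch in PyTorch that follows the common recipe (bag-of-words input, Gaussian latent, softmax topic mixture), not the cited implementation exactly; all layer names are hypothetical:

```python
# Minimal sketch of a VAE-style neural topic model with K = 50 topics.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTopicModel(nn.Module):
    def __init__(self, vocab_size, num_topics=50, hidden=200):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, num_topics)
        self.to_logvar = nn.Linear(hidden, num_topics)
        self.decoder = nn.Linear(num_topics, vocab_size)  # topic-word weights

    def forward(self, bow):
        # bow: (B, vocab_size) bag-of-words counts
        h = self.encoder(bow)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        theta = F.softmax(z, dim=-1)                 # document-topic mixture
        log_probs = F.log_softmax(self.decoder(theta), dim=-1)
        nll = -(bow * log_probs).sum(-1).mean()      # reconstruction term
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return nll + kl, theta

ntm = NeuralTopicModel(vocab_size=2000)              # K defaults to 50
loss, theta = ntm(torch.rand(8, 2000))
print(loss.item(), theta.shape)                      # scalar, torch.Size([8, 50])
```

The rows of decoder.weight.t() can be read as unnormalized topic-word distributions, which is what makes the learned topics inspectable and usable as a topic memory downstream.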
“…Recently, several works have been proposed that utilize neural networks for short text classification. Zeng et al. (2018) proposed a novel topic memory network to encode category-relevant representations. Topic inference and document classification are performed jointly in their model.…”
Section: Related Work (mentioning)
confidence: 99%