2019
DOI: 10.1016/j.ipm.2019.102084

TDAM: A topic-dependent attention model for sentiment analysis

Abstract: We propose a topic-dependent attention model for sentiment classification and topic extraction. Our model assumes that a global topic embedding is shared across documents and employs an attention mechanism to derive local topic embeddings for words and sentences. These are subsequently incorporated into a modified Gated Recurrent Unit (GRU) for sentiment classification and for the extraction of topics bearing different sentiment polarities. Those topics emerge from the words' local topic embeddings learned by the interna…
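To make the mechanism in the abstract concrete, here is a minimal PyTorch sketch of topic-dependent attention: a global topic embedding matrix shared across documents, word-level attention over topics yielding local topic embeddings, and a recurrent layer consuming them. All class and variable names, dimensions, and the choice to concatenate the local topic vector into a standard GRU input are assumptions for illustration only; the paper itself modifies the GRU gates and also applies attention at the sentence level.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicDependentAttention(nn.Module):
    """Hypothetical sketch of TDAM-style topic attention (not the authors' code).

    A global topic embedding matrix is shared across all documents; each word
    attends over the K topics to obtain its *local* topic embedding, which is
    then mixed into a GRU that produces the document representation.
    """

    def __init__(self, vocab_size, emb_dim=100, num_topics=50, hidden_dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # Global topic embeddings, shared across documents (assumed shape K x d).
        self.topic_emb = nn.Parameter(torch.randn(num_topics, emb_dim))
        self.gru = nn.GRU(2 * emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)  # e.g. binary sentiment

    def forward(self, token_ids):
        w = self.word_emb(token_ids)             # (B, L, d) word embeddings
        # Word-to-topic attention: each word scores the K global topics.
        scores = w @ self.topic_emb.t()          # (B, L, K)
        alpha = F.softmax(scores, dim=-1)        # per-word topic weights
        local_topics = alpha @ self.topic_emb    # (B, L, d) local topic embeddings
        # Assumption: concatenate word and local topic vectors as GRU input;
        # the paper instead incorporates the topic signal into the GRU gates.
        h, _ = self.gru(torch.cat([w, local_topics], dim=-1))
        return self.out(h[:, -1]), alpha         # sentiment logits + topic weights

model = TopicDependentAttention(vocab_size=10000)
logits, alpha = model(torch.randint(0, 10000, (4, 20)))  # batch of 4 docs, length 20
```

The per-word topic weights (alpha) are what make topic extraction possible: aggregating them over a corpus surfaces the words most strongly associated with each topic, including the topics bearing different sentiment polarities that the abstract mentions.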

Cited by 50 publications (28 citation statements) | References 23 publications

Citation statements (ordered by relevance):
“…Disentangled representation learning: deep generative models learn the hidden semantics of text, and many attempt to capture independent latent factors to steer the generation of text in the context of NLP (Hu et al., 2017; Li et al., 2018a; Pergola et al., 2019; John et al., 2019). The majority of the aforementioned work employs VAE to learn controllable factors, leading to the abundance of VAE-based models in disentangled representation learning (Higgins et al., 2017; Burgess et al., 2018; Chen et al., 2018)…”
Section: Related Work (mentioning)
Confidence: 99%
“…Additionally, there are many previous supervised topic models that directly incorporate supervision information into the unsupervised versions, including both work on single-label learning [21, 14, 38, 39, 35, 36, 29, 24, 26] and multi-label classification [32, 27, 28, 30, 13, 17, 25, 37, 1]. These models have empirically achieved very competitive classification performance; however, they require labeled documents as inputs…”
Section: Related Work (mentioning)
Confidence: 99%
“…Hierarchical topic models (Viegas et al., 2020) utilize relationships among the latent topics. Supervised topic models have been explored previously, where the topic model is trained through human feedback (Kumar et al., 2019) or jointly with a task-specific network so that topic extraction is guided by task labels (Pergola et al., 2019; Wang and Yang, 2020). Card et al. (2018) leverage document metadata, but without metadata their method is the same as ProdLDA, which is our baseline…”
Section: Introduction (mentioning)
Confidence: 99%