Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.299

TAN-NTM: Topic Attention Networks for Neural Topic Modeling

Abstract: Topic models have been widely used to learn text representations and gain insight into document corpora. To perform topic discovery, most existing neural models either take document bag-of-words (BoW) or sequence of tokens as input followed by variational inference and BoW reconstruction to learn topic-word distribution. However, leveraging topic-word distribution for learning better features during document encoding has not been explored much. To this end, we develop a framework TAN-NTM, which processes docum…

Cited by 11 publications (6 citation statements)
References 47 publications
“…Typically, a statistical topic model [15,4] captures topics in the form of latent variables with probability distributions over the entire vocabulary and performs approximate inference over document-topic and topic-word distributions through Variational Bayes [3]. However, such a learning paradigm requires an expensive iterative inference step performed on every document in a corpus [30]. Efficiency is boosted by the introduction of VAE-based (Variational AutoEncoder) neural topic models [2,49], because variational inference can be performed in a single forward pass [19].…”
Section: Related Work (mentioning, confidence: 99%)
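To make the "single forward pass" amortized inference concrete, here is a minimal, hypothetical PyTorch-style sketch of a generic VAE-based neural topic model (NVDM/ProdLDA-style). The class name, layer sizes, and the Gaussian prior are illustrative assumptions, not the architecture of any specific cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAENTM(nn.Module):
    """Hypothetical VAE-style neural topic model: inference is a single forward pass."""
    def __init__(self, vocab_size=2000, num_topics=50, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Softplus())
        self.to_mu = nn.Linear(hidden, num_topics)
        self.to_logvar = nn.Linear(hidden, num_topics)
        # topic-word matrix (decoder): reconstructs the BoW from topic proportions
        self.beta = nn.Linear(num_topics, vocab_size, bias=False)

    def forward(self, bow):                       # bow: (batch, vocab_size) count vector
        h = self.encoder(bow)                     # amortized inference: no per-document loop
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation trick
        theta = F.softmax(z, dim=-1)              # document-topic proportions
        log_probs = F.log_softmax(self.beta(theta), dim=-1)       # mixture word distribution
        recon = -(bow * log_probs).sum(-1)        # BoW reconstruction term
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (recon + kl).mean()                # negative ELBO to minimise
```

A single encoder pass replaces the per-document iterative inference of classical Variational Bayes, which is the efficiency gain the snippet above refers to.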
“…We adopt a similar approach to Lu et al. (2018) by modeling attention using topics. However, unlike the topic attention model (TAN), which uses a bag-of-words (BoW) model based on variational inference to align the topic space and word space while extracting meaningful topics (Panwar et al., 2021), we assume that these multiple attention heads represent multiple topics in terms of their semantics.…”
Section: Role of Attention Mechanism (mentioning, confidence: 99%)
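As one way to illustrate reading attention heads as topics (a sketch of the general idea, not the cited papers' exact formulation), the following hypothetical module learns one query vector per topic and lets each attend over a document's word embeddings, producing one per-topic document vector; all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicHeads(nn.Module):
    """Hypothetical reading of attention heads as topics: one learnable query per topic."""
    def __init__(self, emb_dim=300, num_topics=20):
        super().__init__()
        self.topic_queries = nn.Parameter(torch.randn(num_topics, emb_dim))

    def forward(self, word_emb, mask):
        # word_emb: (batch, seq, emb_dim); mask: (batch, seq), 1 for real tokens
        scores = torch.einsum('te,bse->bts', self.topic_queries, word_emb)
        scores = scores.masked_fill(mask.unsqueeze(1) == 0, -1e9)
        attn = F.softmax(scores, dim=-1)                      # per-topic weights over words
        return torch.einsum('bts,bse->bte', attn, word_emb)   # one document vector per topic
```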
“…At around the same time, [12] independently proposes an SNTM whose generative process is similar to [66], with an additional variable modelling stop words and several variants in the inference process. Recently, [44] proposes to use an LSTM with attention as the encoder, taking s as input, where the attention incorporates topical information through a context vector constructed from topic embeddings and document embeddings. [47] introduces an SNTM related to [12], where instead of marginalising out the discrete topic assignments, the paper proposes to generate them from an RNN model.…”
Section: Sequential NTMs (mentioning, confidence: 99%)
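A rough, hypothetical sketch of the encoder style described in this snippet: an LSTM over the token sequence whose attention scores use a context vector built from topic embeddings and a document embedding. This follows the snippet's description only loosely and is not TAN-NTM's actual implementation; the class, pooling choice, and dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAttentionEncoder(nn.Module):
    """Hypothetical LSTM encoder whose attention uses a topic-informed context vector."""
    def __init__(self, vocab_size=2000, emb_dim=300, hidden=300, num_topics=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.topic_emb = nn.Parameter(torch.randn(num_topics, hidden))  # topic embeddings
        self.doc_proj = nn.Linear(hidden, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, tokens):                      # tokens: (batch, seq) of word ids
        h, _ = self.lstm(self.embed(tokens))        # (batch, seq, hidden)
        doc = self.doc_proj(h.mean(dim=1))          # crude document embedding (mean pooling)
        # context vector built from topic embeddings and the document embedding
        weights = F.softmax(doc @ self.topic_emb.t(), dim=-1)   # (batch, num_topics)
        context = weights @ self.topic_emb                      # (batch, hidden)
        attn = F.softmax(self.score(torch.tanh(h + context.unsqueeze(1))).squeeze(-1), dim=-1)
        return (attn.unsqueeze(-1) * h).sum(dim=1)  # topic-aware document representation
```

The returned vector can then feed a variational inference head (as in the first sketch) so that the topics guide which tokens the encoder attends to.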