2019
DOI: 10.48550/arxiv.1907.05545
Preprint

The Dynamic Embedded Topic Model

Abstract: Topic modeling analyzes documents to learn meaningful patterns of words. Dynamic topic models capture how these patterns vary over time for a set of documents that were collected over a large time span. We develop the dynamic embedded topic model (D-ETM), a generative model of documents that combines dynamic latent Dirichlet allocation (D-LDA) and word embeddings. The D-ETM models each word with a categorical distribution whose parameter is given by the inner product between the word embedding and an embedding…
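
The abstract's key mechanism lends itself to a short illustration: each topic's distribution over the vocabulary is the softmax of inner products between the word embeddings and that topic's (time-specific) embedding. The NumPy sketch below is a hypothetical rendering of that construction, not the authors' implementation; the names and sizes (rho, alpha_t, theta_d, V, K, L) are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Assumed sizes: V words, K topics, L embedding dimensions.
V, K, L = 5000, 10, 300
rng = np.random.default_rng(0)

rho = rng.normal(size=(V, L))      # word embeddings (shared across time)
alpha_t = rng.normal(size=(K, L))  # topic embeddings at one time step t

# Each topic's distribution over the vocabulary: softmax of the inner
# products between every word embedding and that topic's embedding.
beta_t = softmax(rho @ alpha_t.T, axis=0)  # shape (V, K); each column sums to 1

# A document's word distribution mixes the topics by its proportions theta.
theta_d = softmax(rng.normal(size=K))      # topic proportions for one document
p_w = beta_t @ theta_d                     # shape (V,): a categorical over words
```

In the full model the topic embeddings evolve over time steps; the sketch above fixes a single step t.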

Cited by 18 publications (27 citation statements), published 2020–2024 · References 25 publications

“…Finally, GRADE is also related to dynamic topic modelling (Dieng et al., 2019; Blei and Lafferty, 2006) as both can also be viewed as state-space models. The difference is that in GRADE we are dealing with multinomial distributions over the nodes and communities instead of topics and words.…”
Section: Related Work
confidence: 99%
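
In the D-ETM, the state-space view mentioned in this citation corresponds to a random-walk prior over the per-time-step topic embeddings. A minimal sketch of sampling such a trajectory follows, with an assumed step scale delta and assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, T = 10, 300, 25   # topics, embedding size, time steps (assumed sizes)
delta = 0.05            # random-walk step scale (assumed)

# Gaussian random walk over topic embeddings: alpha[t] ~ N(alpha[t-1], delta^2 I).
alpha = np.zeros((T, K, L))
alpha[0] = rng.normal(size=(K, L))
for t in range(1, T):
    alpha[t] = alpha[t - 1] + delta * rng.normal(size=(K, L))
```

Each alpha[t] would then parameterize the topic-word distributions at time t, as in the earlier sketch.
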
“…The difference is that in GRADE we are dealing with multinomial distributions over the nodes and communities instead of topics and words. Moreover, some works like (Bamler and Mandt, 2017; Rudolph and Blei, 2018) have focused on the shift of word meaning over time, and others such as (Dieng et al., 2019) model the evolution of documents. In contrast, GRADE assumes both nodes and communities undergo temporal semantic shift.…”
Section: Related Work
confidence: 99%

“…Recently, a lot of work has harnessed topic modeling (Blei et al. 2003) along with word vectors to learn better word and sentence representations, e.g., LDA (Chen and Liu 2014), weight-BoC (Kim, Kim, and Cho 2017), TWE, NTSG (Liu, Qiu, and Huang 2015), WTM (Fu et al. 2016), w2v-LDA (Nguyen et al. 2015), TV+MeanWV (Li et al. 2016a), LTSG (Law et al. 2017), Gaussian-LDA (Das, Zaheer, and Dyer 2015), Topic2Vec (Niu et al. 2015), TM (Dieng, Ruiz, and Blei 2019b), LDA2vec (Moody 2016), D-ETM (Dieng, Ruiz, and Blei 2019a), and MvTM. (Kiros et al. 2015) propose skip-thought document embedding vectors, which transformed the idea of abstracting the distributional hypothesis from word to sentence level.…”
Section: Related Work
confidence: 99%

“…Then we develop the Sawtooth Connection technique to capture the dependencies between the topics at different layers, where the factor loading at layer l is the factor score at layer l − 1, which enables the hierarchical topics to be coupled together across all layers. Our work is inspired by both GBN (Zhou et al., 2015), a multi-stochastic-layer hierarchical topic model, and the embedding topic models (Dieng et al., 2019; 2020), which represent the words and single-layer topics as embedding vectors. The proposed Sawtooth Connector is a novel method that combines the advantages of both models for hierarchical topic modeling.…”
Section: Introduction
confidence: 99%
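
The layer coupling described in this last statement can be illustrated with a toy sketch: the topic embeddings of layer l − 1 act as the "vocabulary" embeddings when building the loading matrix of layer l, so adjacent layers share parameters. This is an assumed rendering of the idea, not the Sawtooth Connector implementation; all names and sizes are hypothetical.

```python
import numpy as np

def softmax(x, axis=0):
    x = x - x.max(axis=axis, keepdims=True)  # stability shift
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
V, L = 2000, 100                 # vocabulary size, embedding size (assumed)
layer_sizes = [64, 32, 16]       # topics per layer, bottom to top (assumed)

# Layer 0 "topics" are the words themselves: start from word embeddings.
embeddings = [rng.normal(size=(V, L))]
for K in layer_sizes:
    embeddings.append(rng.normal(size=(K, L)))

# Sawtooth-style coupling: the loading matrix Phi at layer l is built from
# the embeddings of layer l - 1 (rows) and layer l (columns), so adjacent
# layers share parameters and the hierarchical topics stay linked.
Phi = [softmax(embeddings[l - 1] @ embeddings[l].T, axis=0)
       for l in range(1, len(embeddings))]
# Phi[0]: (V, 64) word-by-topic; Phi[1]: (64, 32); Phi[2]: (32, 16)
```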