Recently, much work has harnessed topic modeling (Blei et al. 2003) together with word vectors to learn better word and sentence representations, e.g., LDA (Chen and Liu 2014), weight-BoC (Kim, Kim, and Cho 2017), TWE, NTSG (Liu, Qiu, and Huang 2015), WTM (Fu et al. 2016), w2v-LDA (Nguyen et al. 2015), TV+MeanWV (Li et al. 2016a), LTSG (Law et al. 2017), Gaussian-LDA (Das, Zaheer, and Dyer 2015), Topic2Vec (Niu et al. 2015), TM (Dieng, Ruiz, and Blei 2019b), LDA2vec (Moody 2016), D-ETM (Dieng, Ruiz, and Blei 2019a), and MvTM. Kiros et al. (2015) proposed skip-thought vectors, which extend the distributional hypothesis from the word level to the sentence level.
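As a rough illustration of the idea these methods share, the following is a minimal sketch (not a re-implementation of any cited model) that concatenates a document's LDA topic proportions with its mean word vector to form a document representation; the toy corpus, dimensionalities, and the `embed` helper are assumptions made for this example.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec

# Toy corpus, invented for illustration only.
docs = [
    ["topic", "models", "discover", "latent", "themes"],
    ["word", "vectors", "capture", "distributional", "semantics"],
    ["documents", "mix", "topics", "and", "word", "semantics"],
]

# Topic side: fit LDA and read off per-document topic proportions.
dictionary = Dictionary(docs)
bows = [dictionary.doc2bow(doc) for doc in docs]
lda = LdaModel(bows, num_topics=2, id2word=dictionary, random_state=0)

# Word side: train small word vectors on the same corpus.
w2v = Word2Vec(docs, vector_size=16, min_count=1, seed=0)

def embed(doc, bow):
    """Concatenate the LDA topic mixture with the mean word vector."""
    topic_vec = np.zeros(lda.num_topics)
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        topic_vec[topic_id] = prob
    mean_wv = np.mean([w2v.wv[w] for w in doc], axis=0)
    return np.concatenate([topic_vec, mean_wv])

doc_embeddings = [embed(doc, bow) for doc, bow in zip(docs, bows)]
print(doc_embeddings[0].shape)  # (2 + 16,) = (18,)
```

The cited models differ substantially in how they couple the two components (e.g., jointly training topics and embeddings versus post-hoc combination as above), but all exploit topic structure alongside distributional word representations.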