2019
DOI: 10.1109/access.2019.2897910

Building a TIN-LDA Model for Mining Microblog Users’ Interest

Abstract: The latent Dirichlet allocation (LDA) model is a common method for mining the interests of microblog users, but it does not capture the hierarchical structure or dynamic trend of those interests. This paper therefore draws on the timeliness and interactivity of microblogs to judge the hierarchical orientation and the dynamic trend orientation of users' interests, and on the basis of this dynamic interest hierarchy constructs a three-layer interest network (TIN-LDA) model to …
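For context on the LDA baseline that TIN-LDA extends, the standard model can be sketched as a minimal collapsed Gibbs sampler in pure Python. This is an illustrative sketch of vanilla LDA only, not the paper's TIN-LDA (which adds interest-hierarchy layers not shown here); all function and variable names are hypothetical:

```python
import random

def lda_gibbs(docs, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for vanilla LDA.
    docs: list of token lists; K: number of topics.
    Returns (theta, phi, vocab): doc-topic and topic-word distributions."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * K for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(K)]   # topic-word counts
    nk = [0] * K                        # tokens per topic
    z = []                              # topic assignment per token
    for d, doc in enumerate(docs):      # random initialization
        zs = []
        for w in doc:
            t = rng.randrange(K)
            zs.append(t)
            ndk[d][t] += 1
            nkw[t][widx[w]] += 1
            nk[t] += 1
        z.append(zs)
    for _ in range(iters):              # Gibbs sweeps
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, wi = z[d][i], widx[w]
                # remove current assignment from the counts
                ndk[d][t] -= 1; nkw[t][wi] -= 1; nk[t] -= 1
                # full conditional p(z=k | rest), up to a constant
                weights = [(ndk[d][k] + alpha) * (nkw[k][wi] + beta)
                           / (nk[k] + V * beta) for k in range(K)]
                t = rng.choices(range(K), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][wi] += 1; nk[t] += 1
    phi = [[(nkw[k][v] + beta) / (nk[k] + V * beta) for v in range(V)]
           for k in range(K)]
    theta = [[(ndk[d][k] + alpha) / (len(docs[d]) + K * alpha)
              for k in range(K)] for d in range(len(docs))]
    return theta, phi, vocab

# Toy corpus: two short "microblog" documents
docs = [["apple", "apple", "fruit"], ["ball", "ball", "game"]]
theta, phi, vocab = lda_gibbs(docs, K=2, iters=50)
```

The TIN-LDA model described in the abstract layers interest-hierarchy and time information on top of this basic generative structure; the sampler above shows only the shared LDA core.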

Cited by 20 publications (5 citation statements)
References 27 publications
“…Qualitative analysis: the qualitative analysis highlighted some observations related to the existing TSTTM and ASTTM models. From Tables 2, 3, and 4 it can be observed that, among the existing TSTTM models, some works tried to increase accuracy (Wang et al 2012; Fang et al 2017; Kumar and Vardhan 2019; Sharath et al 2019; Valdez et al 2018; Belford et al 2016; Yan et al 2012; Muliawati and Murfi 2017; Capdevila et al 2017; Li et al 2015), enhance coherence (Zhao et al 2011; Zheng et al 2019; Han et al 2020; Kim et al 2020; Farahat et al 2015; Lacoste-Julien et al 2009; He et al 2019b), alleviate the data-sparsity problem (Ozyurt and Akcayol 2021; Akhtar et al 2019b; Iskandar 2017; Pang et al 2019), or extract topics from unlabeled data (Korshunova et al 2019; Ozyurt and Akcayol 2021). Other existing works addressed the lack of semantic information and of local word co-occurrence (Akhtar et al 2019a; Chen and Kao 2017; Chen et al 2020b).…”
Section: Discussion
confidence: 99%
“…Tajbakhsh and Bagherzadeh (2019) designed a semantic-knowledge LDA with topic vectors, extracting tweet topics based on word co-occurrence for a recommendation system. Zheng et al (2019) developed the Three-layer Interest Network LDA (TIN-LDA) model to discover topics from tweet data through interest attributes. TIN-LDA effectively extracts the semantic correlation among keywords, thereby enhancing the coherence of the topics.…”
Section: Probabilistic Based Models
confidence: 99%
“…Perplexity is a common indicator for measuring the quality of language models (45). In the LDA model, the optimal number of topics K for the review texts was determined by perplexity.…”
Section: Topic Number
confidence: 99%
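The perplexity criterion mentioned in the last citation can be written out directly: perplexity is exp(-log-likelihood / N) over N held-out tokens, and K is typically chosen where perplexity stops improving. A small self-contained sketch with toy numbers (all names hypothetical, not from the paper):

```python
import math

def perplexity(docs, theta, phi, widx):
    """Perplexity of an LDA-style mixture: exp(-sum log p(w|d) / N).
    Lower is better; p(w|d) = sum_k theta[d][k] * phi[k][w]."""
    ll, n = 0.0, 0
    for d, doc in enumerate(docs):
        for w in doc:
            pw = sum(theta[d][k] * phi[k][widx[w]] for k in range(len(phi)))
            ll += math.log(pw)
            n += 1
    return math.exp(-ll / n)

# Toy example with K=2 topics over a 2-word vocabulary.
# Each word has probability 0.5 under the mixture, so the
# perplexity equals 2 (the model is as uncertain as a fair coin).
docs = [["a", "b"]]
theta = [[0.5, 0.5]]                  # doc-topic mixture
phi = [[0.9, 0.1], [0.1, 0.9]]        # topic-word distributions
widx = {"a": 0, "b": 1}
pp = perplexity(docs, theta, phi, widx)  # → 2.0
```

Sweeping K and picking the value that minimizes held-out perplexity is the standard selection procedure the citation describes.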