Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2013
DOI: 10.1145/2487575.2487698

Understanding evolution of research themes

Abstract: Understanding how research themes evolve over time in a research community is useful in many ways (e.g., revealing important milestones and discovering emerging major research trends). In this paper, we propose a novel way of analyzing literature citations to explore research topics and theme evolution by modeling article citation relations with a probabilistic generative model. The key idea is to represent a research paper by a "bag of citations" and model such a "citation document" with a probabilisti…
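To make the "bag of citations" idea concrete, below is a minimal sketch, not the authors' implementation (the truncated abstract does not specify their exact generative model), that fits a standard LDA topic model over citation tokens so that each latent topic is a distribution over cited papers. The paper IDs and the use of scikit-learn's LatentDirichletAllocation are illustrative assumptions.

```python
# Hypothetical sketch: treat each paper as a "bag of citations" and fit an
# LDA-style generative model over citation tokens, standing in for the
# probabilistic model described in the (truncated) abstract.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "citation document" lists the IDs of the papers a given article cites
# (IDs are made up for illustration).
citation_docs = [
    "p12 p7 p7 p90",   # paper A cites p12 once, p7 twice, p90 once
    "p7 p90 p33",      # paper B
    "p12 p12 p33",     # paper C
]

# Build the bag-of-citations count matrix (rows = papers, columns = cited IDs).
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(citation_docs)

# Fit the topic model: each latent topic is a distribution over cited papers,
# so topics group papers that tend to be cited together (research themes).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)   # per-paper theme mixtures
print(theta.round(2))
```

Tracking how these per-paper theme mixtures shift across publication years is one simple way to surface theme evolution from citation data alone.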

Cited by 42 publications (1 citation statement)
References 26 publications
“…On the one hand, for the estimation of explicit probability distributions with parameters $\theta$, the model assigns a likelihood $L = \prod_{i=1}^{m} p_{\text{model}}(x^{(i)}; \theta)$ to the $m$ training samples, and the maximum-likelihood principle chooses the parameter $\theta^{*} = \arg\max_{\theta} \prod_{i=1}^{m} p_{\text{model}}(x^{(i)}; \theta)$ that maximizes this probability; on the other hand, for the estimation of implicit probability distributions, the maximum-likelihood solution can be approximated as the parameter $\theta^{*} = \arg\min_{\theta} D_{\mathrm{KL}}\!\left(p_{\text{data}}(x) \,\|\, p_{\text{model}}(x; \theta)\right)$ that minimizes the Kullback-Leibler divergence [32] between the model distribution and the data distribution. Therefore, likelihood-based generative models can be divided into implicit models and explicit models [33].…”
Section: Probability Density Estimation Methods
confidence: 99%
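As a concrete check of the equivalence quoted above, here is a minimal numeric sketch (our illustration, not from [32] or the citing paper): for a unit-variance Gaussian model family, the minimizer of the empirical negative log-likelihood coincides with the minimizer of the closed-form KL divergence, since the entropy of $p_{\text{data}}$ does not depend on $\theta$. The choice of distributions and parameter names is an assumption made for the demo.

```python
# Numeric sketch of MLE <=> KL minimization, assuming p_data = N(1.5, 1)
# and the model family p_model(x; theta) = N(theta, 1).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
mu_data = 1.5
data = rng.normal(loc=mu_data, scale=1.0, size=100_000)

# Objective 1: negative average log-likelihood (empirical cross-entropy).
nll = lambda theta: -norm.logpdf(data, loc=theta, scale=1.0).mean()

# Objective 2: closed-form KL divergence between two unit-variance
# Gaussians: D_KL(N(mu, 1) || N(theta, 1)) = (mu - theta)^2 / 2.
kl = lambda theta: 0.5 * (mu_data - theta) ** 2

theta_mle = minimize_scalar(nll, bounds=(-5, 5), method="bounded").x
theta_kl = minimize_scalar(kl, bounds=(-5, 5), method="bounded").x
print(theta_mle, theta_kl)  # both ~= 1.5: the two objectives agree
```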