2018 IEEE International Conference on Data Mining (ICDM) 2018
DOI: 10.1109/icdm.2018.00043
A Semi-Supervised and Inductive Embedding Model for Churn Prediction of Large-Scale Mobile Games

Abstract: Mobile gaming has emerged as a promising market with billion-dollar revenues. A variety of mobile game platforms and services have been developed around the world. One critical challenge for these platforms and services is to understand user churn behavior in mobile games. Accurate churn prediction will benefit many stakeholders such as game developers, advertisers, and platform operators. In this paper, we present the first large-scale churn prediction solution for mobile games. In view of the common limitatio…

Cited by 36 publications
(18 citation statements)
References 17 publications
“…This property is referred to as temporal smoothness. This property has also been observed and shown to be helpful to improve representation performance in [5]. Suppose that we are at time t + 1.…”
Section: Dynamic Graph Representation Learning
confidence: 73%
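The temporal smoothness property mentioned above is typically enforced as a regularizer that penalizes large changes between a node's embedding at time t and at time t + 1. A minimal sketch of such a penalty (the function name, `lam` weight, and toy embeddings are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def temporal_smoothness_penalty(z_prev, z_curr, lam=0.1):
    """Squared-L2 penalty encouraging embeddings at time t+1 to stay
    close to the embeddings at time t (temporal smoothness)."""
    return lam * np.sum((z_curr - z_prev) ** 2)

# Toy example: embeddings for 3 nodes in 2 dimensions.
z_t = np.zeros((3, 2))   # embeddings at time t
z_t1 = np.ones((3, 2))   # embeddings at time t + 1
penalty = temporal_smoothness_penalty(z_t, z_t1, lam=0.5)
```

In practice this term would be added to the main representation-learning loss, so the optimizer trades off fitting the current snapshot against drifting too far from the previous one.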
See 2 more Smart Citations
“…This property is referred to as temporal smoothness. This property has also been observed and shown to be helpful to improve representation performance in [5]. Suppose that we are at time t + 1.…”
Section: Dynamic Graph Representation Learningmentioning
confidence: 73%
“…There are several studies on learning representations in dynamic graphs, but very few in streaming graphs, which require higher efficiency and lower uncertainty. Liu et al [5] and Hamilton et al [15] propose to leverage node feature information to learn an embedding function that generalizes to unseen nodes. These two methods rely on prior knowledge of new vertices' attributes and are difficult to apply with only topology information.…”
Section: Related Work
confidence: 99%
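The key idea behind the inductive approaches cited above is to learn a parametric function from node attributes to embeddings, rather than a per-node lookup table; because the function depends only on features, it can be applied to vertices that were never seen during training. A minimal sketch of that idea (the random projection `W` stands in for learned parameters and is an assumption for illustration, not either paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # stands in for a learned projection matrix

def embed(features, W):
    """Map raw node attributes to an embedding. Because this is a
    function of the features rather than a node-id lookup, it
    generalizes to nodes unseen at training time."""
    return np.tanh(features @ W)

# A "new" node that never appeared during training: we can still embed it.
new_node_features = rng.normal(size=(1, 4))
z = embed(new_node_features, W)
```

This also makes the cited limitation concrete: if a new vertex arrives with no attributes (topology only), there is no feature vector to feed into `embed`, so these methods cannot produce its embedding directly.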
“…We tune the hyperparameters, including learning rate, batch size, regularization terms, number of layers and number of neurons per layer, based on the model performance on the test datasets. To determine the parameters for micro-level churn prediction, we follow what we did in our previous work [28]. A grid search on these parameters is performed and the combination yielding the best performance is chosen.…”
Section: Experimental Settings
confidence: 99%
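The grid search described in this citation statement can be sketched as an exhaustive sweep over every combination of hyperparameter values, keeping the best-scoring one. The parameter names, grid values, and scoring function below are illustrative assumptions, not the cited paper's actual settings:

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Score every combination in the grid and return the best one."""
    best_score, best_params = float("-inf"), None
    keys = list(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)  # e.g. validation AUC for churn prediction
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy grid and scoring function purely for illustration.
grid = {"learning_rate": [1e-3, 1e-2], "batch_size": [64, 128]}
best, score = grid_search(grid, lambda p: -p["learning_rate"] * p["batch_size"])
```

Note that, as the statement itself implies, selecting hyperparameters on the test set risks optimistic performance estimates; a held-out validation split is the more common choice.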
“…On top of what is illustrated so far, we also identified a series of limitations regarding the employed data sources. The number of considered users rarely goes beyond 10^4 [24], and when it comes to churn estimation the class distribution is usually greatly imbalanced; both of these factors can limit the interpretation and generalization of results.…”
Section: State of the Art and Current Contribution
confidence: 99%