2015 IEEE International Conference on Data Mining Workshop (ICDMW)
DOI: 10.1109/icdmw.2015.26
Deep Convolutional Neural Network and Multi-view Stacking Ensemble in Ali Mobile Recommendation Algorithm Competition: The Solution to the Winning of Ali Mobile Recommendation Algorithm

Cited by 10 publications (14 citation statements)
References 18 publications
“…Literature [9] also used an ensemble learning model with GBDT base learners but, unlike the fusion strategy in Literature [8], adopted a Blending fusion strategy, which achieved better results than the Bagging fusion strategy. Li et al. adopted a Stacking fusion strategy to construct an ensemble learning model with GBDT base learners and achieved better results than Bagging and Blending [7]. Literatures [7][8][9] demonstrated the feasibility of improved fusion strategies for raising the performance of ensemble learning prediction models, thus providing a theoretical foundation for the FCV-Stacking fusion strategy.…”
Section: Ensemble Learning Prediction Model
confidence: 99%
“…Li et al. adopted a Stacking fusion strategy to construct an ensemble learning model with GBDT base learners and achieved better results than Bagging and Blending [7]. Literatures [7][8][9] demonstrated the feasibility of improved fusion strategies for raising the performance of ensemble learning prediction models, thus providing a theoretical foundation for the FCV-Stacking fusion strategy. Zhou et al. put forward a two-layer Multi-Model Stacking Ensemble (MMSE): the first layer trains four ensemble algorithms (RF, AdaBoost, GBDT and XGBoost) as base learners, and the second layer uses XGBoost to combine the four base learners and output the final prediction; the results indicated that its performance exceeds that of prediction models based on any single ensemble algorithm [27].…”
Section: Ensemble Learning Prediction Model
confidence: 99%
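The two-layer MMSE design described in the excerpt can be sketched as follows. This is a minimal illustration, not the cited implementation: scikit-learn's GradientBoostingClassifier stands in for XGBoost (which may not be installed), and synthetic data stands in for real user-behavior features.

```python
# Sketch of a two-layer stacking ensemble (MMSE-style): several ensemble
# algorithms as base learners, a boosted-tree meta-learner on top.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for user purchase-behavior features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# First layer: ensemble algorithms as base learners.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("ada", AdaBoostClassifier(n_estimators=50, random_state=0)),
    ("gbdt", GradientBoostingClassifier(n_estimators=50, random_state=0)),
]

# Second layer: a boosted-tree meta-learner combines the base predictions
# (the cited work uses XGBoost here; GradientBoosting is a stand-in).
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=GradientBoostingClassifier(random_state=0),
    cv=5,  # out-of-fold predictions feed the meta-learner, as in stacking
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

The `cv=5` argument is what distinguishes Stacking from Blending: the meta-learner is trained on out-of-fold base predictions rather than on a single held-out split.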
“…However, the above models suffer from weak feature representation and limited accuracy when processing users' complicated historical behavioral data. Therefore, deep-learning-based prediction models for user purchasing behavior, represented by CNNs and RNNs, have been put forward in succession [4,17]. Song et al. used users' historical purchasing behavioral data to predict buyers' purchase time with MLP and RNN models, respectively, and the results showed that the MLP achieved better results than the RNN [18].…”
Section: Individual Learning Prediction Model
confidence: 99%
“…In addition, the CF algorithm relies excessively on user ratings, so it cannot accurately predict user purchasing behavior when ratings are missing or erroneous. Therefore, numerous researchers have proposed individual learning prediction models [3,4], such as logistic regression, support vector machines (SVM), multilayer perceptrons (MLP) and neural networks, as well as ensemble learning prediction models [4][5][6], such as gradient boosting decision tree (GBDT) and XGBoost. These models treat the prediction of user purchasing behavior as a binary classification problem in machine learning (positive example: purchase; negative example: no purchase).…”
Section: Introduction
confidence: 99%
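The binary-classification framing in the excerpt can be sketched with a GBDT classifier. This is an illustrative setup only: the synthetic, class-imbalanced data stands in for real purchase logs (purchases are typically the rare positive class), and none of the feature names come from the cited work.

```python
# Purchase prediction as binary classification: 1 = purchase, 0 = no purchase.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for behavioral features
# (e.g. click / cart / favorite counts in a real log).
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.8, 0.2], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# GBDT is one of the ensemble models the excerpt names for this task.
gbdt = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
print(round(f1_score(y_te, gbdt.predict(X_te)), 3))
```

F1 is reported rather than accuracy because, with purchases as the rare positive class, a model that always predicts "no purchase" would still score a high accuracy.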