2020
DOI: 10.1177/2158244020983316
Using XGBoost and Skip-Gram Model to Predict Online Review Popularity

Abstract: Review popularity is similar to the awareness and information accessibility components: both have a profound effect on customer purchase decisions. Therefore, this study proposes a new method for predicting online review popularity that combines the extreme gradient boosting tree algorithm (XGBoost), which extracts key features on the basis of ranking scores, with the skip-gram model, which subsequently identifies semantically related words from key textual terms. Findings revealed that written reviews had higher review p…
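The abstract names the two building blocks only at a high level. A minimal sketch of how they fit together, assuming the xgboost and gensim packages; the feature names, reviews, and labels below are hypothetical toy data, not the study's dataset:

```python
# Sketch of the two components named in the abstract (toy data throughout).
import xgboost as xgb
from gensim.models import Word2Vec

# (1) XGBoost: rank candidate review-level features by importance score.
X = [[120, 5, 3], [45, 2, 30], [300, 4, 7], [80, 1, 60]]  # length, rating, age_days
y = [1, 0, 1, 0]                                           # 1 = popular review
clf = xgb.XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, y)
ranking = sorted(zip(["length", "rating", "age_days"], clf.feature_importances_),
                 key=lambda p: p[1], reverse=True)
print("feature ranking:", ranking)

# (2) Skip-gram (sg=1): find words semantically close to a key textual term.
sentences = [["great", "battery", "life"],
             ["battery", "drains", "fast"],
             ["great", "screen", "quality"]]
w2v = Word2Vec(sentences, vector_size=32, window=2, min_count=1, sg=1, seed=1)
print(w2v.wv.most_similar("battery", topn=2))
```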

Cited by 13 publications (3 citation statements)
References 82 publications (125 reference statements)
“…This algorithm was chosen for its fast learning and efficient memory use. It combines several weak predictors to create a robust classifier (Swamynathan, 2017; Nguyen et al., 2020).…”
Section: Methods (mentioning)
confidence: 99%
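The "several weak predictors → robust classifier" point can be made concrete: truncating a boosted ensemble after its first few trees shows accuracy climbing as weak trees accumulate. A minimal sketch with the xgboost package on synthetic data, not the cited studies' data:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)               # toy binary target

model = xgb.XGBClassifier(n_estimators=30, max_depth=2)  # 30 deliberately weak trees
model.fit(X, y)

booster = model.get_booster()
dmat = xgb.DMatrix(X)
for k in (1, 5, 30):
    # Score using only the first k trees of the ensemble.
    margin = booster.predict(dmat, output_margin=True, iteration_range=(0, k))
    acc = ((margin > 0).astype(int) == y).mean()
    print(f"first {k:2d} trees -> training accuracy {acc:.2f}")
```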
“…XB, which stands for extreme gradient boosting, is a boosting algorithm recognized chiefly for parallel tree boosting and widely used to solve data science problems accurately and efficiently in terms of speed and performance [55]. This algorithm is also widely used to predict online review popularity [56] based on customer purchase decisions, where the key features are extracted from the data on the basis of ranking scores. For the XB model, precision, recall, and F1-score were all 100% on the training data, while on the testing data they were 83.4%, 83.1%, and 83.1%, respectively (see Table 5).…”
Section: Performance Analysis of ML Models (mentioning)
confidence: 99%
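The precision, recall, and F1 figures quoted above follow the standard definitions. A short sketch of how they are computed with scikit-learn, using hypothetical labels and predictions rather than the cited study's outputs:

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical ground-truth test labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical model predictions
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                   average="weighted")
print(f"precision={prec:.3f}  recall={rec:.3f}  f1={f1:.3f}")
```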
“…In addition, the probability that a context word c remains near the target word t can be calculated as follows [9]:…”
Section: Skip N-gram Model (mentioning)
confidence: 99%
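The equation the statement introduces is cut off in this excerpt. In the standard skip-gram formulation, the probability of context word c given target word t is the softmax over the vocabulary V:

$$
p(c \mid t) = \frac{\exp\left(\mathbf{v}_c^{\top}\mathbf{v}_t\right)}{\sum_{c' \in V} \exp\left(\mathbf{v}_{c'}^{\top}\mathbf{v}_t\right)}
$$

where \mathbf{v}_t and \mathbf{v}_c are the target and context embedding vectors. In practice this softmax is typically approximated by negative sampling or hierarchical softmax, since summing over the full vocabulary is expensive.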