Business competitive analysis using promoted post detection on social media (2020)
DOI: 10.1016/j.jretconser.2019.101941

Cited by 17 publications (6 citation statements). References 38 publications.
“…The training set is randomly generated. RF is implemented to reduce the correlation between the random distributions of the input set and improve the bagging [91], [92]. Some techniques can improve the RF algorithm; for instance, nonparametric RF is more robust to outliers compared to other bagging or boosting algorithms [92].…”
Section: Ensemble Methods (mentioning; confidence: 99%)
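The statement above describes bagging in random forests: each tree is trained on a randomly drawn bootstrap sample, which reduces correlation between the trees. A minimal illustrative sketch (not from the cited paper; the dataset and hyperparameters are assumptions) using scikit-learn:

```python
# Sketch: bagging de-correlates trees by fitting each one on a bootstrap
# resample; a random forest additionally draws a random feature subset at
# every split. Data and settings here are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Plain bagging: bootstrap samples only.
bagged_trees = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=100, random_state=0
)

# Random forest: bootstrap samples plus random feature subsets per split,
# which further reduces correlation between the trees.
forest = RandomForestClassifier(
    n_estimators=100, max_features="sqrt", random_state=0
)

print("bagging CV accuracy:", cross_val_score(bagged_trees, X, y, cv=5).mean())
print("random forest CV accuracy:", cross_val_score(forest, X, y, cv=5).mean())
```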
“…XGB is an efficient model for making decisions from a large dataset, and it is obtained by recursively partitioning the data space and fitting the prediction model at each partition. The individual decisions are entirely inaccurate but are better than those generated randomly [91]. RF combines several decision trees at the end of the process, whereas gradient boosting combines decision trees at the beginning of the process [93].…”
Section: Ensemble Methods (mentioning; confidence: 99%)
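The quoted passage contrasts random forests, which average independently grown trees, with gradient boosting (XGB), which adds trees sequentially so that each new tree corrects the current ensemble's errors. A hedged sketch of that contrast, with synthetic data and illustrative settings:

```python
# Random forest: trees grown independently on bootstrap samples, then averaged.
# Gradient boosting (XGBoost here): trees added one after another, each fitted
# to the errors of the ensemble built so far. Data and parameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

rf = RandomForestClassifier(n_estimators=300, random_state=1)          # parallel, averaged trees
xgb = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.1)  # sequential, additive trees

for name, model in [("random forest", rf), ("xgboost", xgb)]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```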
“…Stacking, on the other hand, is relatively complex. It first uses the basic learner to fit the data in the training set to generate the underlying model and then uses the prediction value generated by the basic learner in the first layer as the input of the second layer [26]. Unlike voting fusion, it can be regarded as heterogeneous integration, and it is an advantage set at the algorithm level.…”
Section: Voting Fusion Includes Mean Fusion Which Uses Multiple Group… (mentioning; confidence: 99%)
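The two-layer stacking described above (first-layer base learners whose predictions become the input of a second-layer learner) can be sketched with scikit-learn's StackingClassifier; the base and meta-learners chosen here are assumptions for illustration only:

```python
# Minimal stacking sketch: base learners are fitted on the training data and
# their out-of-fold predictions feed a second-layer meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=2)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=2)),
    ("svc", SVC(probability=True, random_state=2)),
]

# The meta-learner (second layer) is trained on the base learners'
# cross-validated predictions rather than on the raw features.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
)

print("stacking CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```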
“…Several teams participated in the challenge; one research work presented in this challenge was by Wang and Zhang (2017) on a collected Flickr dataset. In this work, a gradient boosting regression tree (GBRT) used ridge regression to generate prediction scores by modelling the linear regression (Gerasimovic & Bugaric, 2018; Arora et al, 2020) between features. Overall, 10 features (user id as a distinct key, post-date, number of comments, hashtags, length of title, length of description, tag count, average view, group count and average member count) are used to predict popularity with the ridge regression method (Wang & Zhang, 2017).…”
Section: Related Research and Supporting Evidence (mentioning; confidence: 99%)
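As quoted, Wang and Zhang (2017) predict popularity from ten post-level features using ridge regression. A minimal sketch of that kind of pipeline, with synthetic data and hypothetical column names standing in for the listed features, might look as follows:

```python
# Sketch of ridge regression over the ten features listed in the quoted
# statement. Column names and data are hypothetical placeholders for the
# Flickr features described by Wang and Zhang (2017).
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
features = [
    "user_id", "post_date", "num_comments", "num_hashtags", "title_length",
    "description_length", "tag_count", "average_view", "group_count",
    "average_member_count",
]
X = pd.DataFrame(rng.random((n, len(features))), columns=features)
y = rng.random(n)  # popularity score (synthetic placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

model = Ridge(alpha=1.0)  # L2-regularised linear regression
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("test MSE:", mean_squared_error(y_test, pred))
```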