2021
DOI: 10.5829/ije.2021.34.01a.10
The Predictability of Tree-based Machine Learning Algorithms in the Big Data Context

Abstract: This research work concerns the predictability of ensemble and singular tree-based machine learning algorithms during the recession and prosperity of two companies listed on the Tehran Stock Exchange, in the context of big data. The main issue is that economic managers and the academic community require prediction models with higher accuracy and reduced execution time; moreover, predicting companies' recession in the stock market is highly significant. Machine learning algor…

Cited by 2 publications (5 citation statements) | References 9 publications
“…The second most popular method for fusing classification results is the tree-based method, which involves first feeding forecasts of all base learners into a tree-based algorithm, then mapping each prediction to a neighborhood in the set of dependent variables, and then returning the mean neighborhood [99]. The most commonly used tree-based methods include gradient boosting [40], [75], [96] and random forest [28], [40], [75]. It is worth noting that Barak et al. [27] used five tree-based methods for decision fusion: the BF tree, decision table, decision tree, decision tree naïve Bayes (DTNB), and the LAD tree, with the decision table performing the best.…”
Section: A. Fusion Methods for Classification
confidence: 99%
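The fusion scheme quoted above — feeding every base learner's forecast into a tree-based meta-model — is commonly called stacking. A minimal sketch of that idea, assuming scikit-learn and synthetic data (the specific base learners and parameters here are illustrative, not those of the cited papers):

```python
# Tree-based decision fusion (stacking): base learners' predictions
# become the inputs of a decision-tree meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

fused = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    # The decision tree plays the role of the tree-based fusion step.
    final_estimator=DecisionTreeClassifier(max_depth=3, random_state=0),
)
fused.fit(X_tr, y_tr)
print(f"fused accuracy: {fused.score(X_te, y_te):.2f}")
```

Any of the tree-based methods named in the quote (gradient boosting, random forest, a decision table) could be swapped in as `final_estimator`.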
“…Table III lists homogeneous base learners for classification and the corresponding authors: ANN [22], [33], [41], [42], [57], [58]; Decision Tree [28], [40], [75], [96]; SVM [43], [68], [93]; LSTM [30], [88]; PNN [34]; ELM [48]; DBN [84].…”
Section: Table III. Homogeneous Base Learners for Classification Base ...
confidence: 99%
“…By increasing the complexity of the decision tree, the probability of overfitting increases considerably; the training error decreases while the test error increases [2]. This phenomenon arises from noise in the training dataset or an inappropriate selection of training data.…”
Section: Decision Tree Pruning
confidence: 99%
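The depth-versus-overfitting effect described in that statement is easy to reproduce. A minimal sketch, assuming scikit-learn and noisy synthetic data (not the paper's stock dataset): an unconstrained tree memorizes the training set while its test score lags, and cost-complexity pruning is one standard remedy.

```python
# Deeper trees drive training error to zero on noisy data
# while test error stops improving (overfitting).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y=0.2 injects label noise, the condition under which pruning matters.
X, y = make_classification(n_samples=400, n_features=20, flip_y=0.2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for depth in (2, 5, None):  # None = grow until every leaf is pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=1).fit(X_tr, y_tr)
    print(depth, round(tree.score(X_tr, y_tr), 2), round(tree.score(X_te, y_te), 2))

# Cost-complexity pruning caps complexity instead of fixing depth by hand.
pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=1).fit(X_tr, y_tr)
```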
“…Classification is one of the most widely used data mining methods for building a model that assigns labels to samples based on their characteristics. In this regard, the decision tree is one of the most widely used algorithms, as it can produce human-understandable descriptions of relationships in a dataset [2]. Further, this algorithm is among the most widely used in the pattern recognition domain thanks to its simplicity and interpretability, hierarchical rule representation, reasonable construction cost and time, ability to work with both continuous and discrete data, little need for prior knowledge, and accurate presentation.…”
Section: Introduction
confidence: 99%