Deep Cascade of Extra Trees
2019
DOI: 10.1007/978-3-030-26142-9_11

Cited by 7 publications (2 citation statements)
References 13 publications
“…The simplification can also occur once the DF architecture is trained, as in [11], selecting in each forest the most important paths to reduce the network's time- and memory-complexity. Approaches to increase the approximation capacity of DF have also been proposed by adjoining weights to trees or to forests in each layer [20,21], replacing the forest by more complex estimators (cascade of ExtraTrees) [2], or by combining several of the previous modifications, notably incorporating data preprocessing [9]. Overall, the related works on DF exclusively represent algorithmic contributions without a formal understanding of the driving mechanisms at work inside the forest cascade.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
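The cascade-of-ExtraTrees modification cited here [2] stacks layers of extremely randomized trees, with each layer's class-probability outputs appended to the input features of the next layer. The following is a minimal sketch of that general pattern, not the cited implementation: it assumes scikit-learn's ExtraTreesClassifier, a fixed number of layers instead of the accuracy-driven layer growth typical of deep forests, and an illustrative feature-passing scheme (original features plus accumulated out-of-fold probabilities).

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict


class ExtraTreesCascade:
    """Toy cascade: each layer's ExtraTrees forests feed their class
    probabilities forward as extra features for the next layer."""

    def __init__(self, n_layers=3, n_forests=2, n_estimators=100, random_state=0):
        self.n_layers = n_layers
        self.n_forests = n_forests
        self.n_estimators = n_estimators
        self.random_state = random_state
        self.layers_ = []

    def fit(self, X, y):
        features = np.asarray(X)
        for layer in range(self.n_layers):
            forests, probas = [], []
            for f in range(self.n_forests):
                forest = ExtraTreesClassifier(
                    n_estimators=self.n_estimators,
                    random_state=self.random_state + layer * self.n_forests + f,
                )
                # Out-of-fold probabilities avoid leaking training labels
                # into the features passed to the next layer.
                probas.append(cross_val_predict(
                    forest, features, y, cv=3, method="predict_proba"))
                forests.append(forest.fit(features, y))
            self.layers_.append(forests)
            features = np.hstack([features] + probas)
        return self

    def predict(self, X):
        features = np.asarray(X)
        for forests in self.layers_:
            probas = [forest.predict_proba(features) for forest in forests]
            features = np.hstack([features] + probas)
        # Average the final layer's class probabilities and map back to labels.
        classes = self.layers_[-1][0].classes_
        return classes[np.argmax(np.mean(probas, axis=0), axis=1)]
```

Usage follows the familiar scikit-learn pattern, e.g. `ExtraTreesCascade(n_layers=3).fit(X_train, y_train).predict(X_test)`; the number of layers and forests per layer are illustrative defaults, not values from the cited paper.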
“…The analysis process involved the selection of features theoretically linked to gas-sensitive properties, and the data set was divided into a training set (70%) and a test set (30%); both sets underwent normalization. Ten algorithms were evaluated to identify the most suitable machine learning model: Multilayer Perceptron (MLP), Logistic Regression, Random Forest, K-Nearest Neighbors (KNNs), Support Vector Machine (SVM), Decision Tree, Extra Trees, AdaBoost, Bagging, and Voting. For the logistic regression algorithm, we employed L2 regularization to mitigate the risk of overfitting.…”
Citation type: mentioning
confidence: 99%
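The protocol described in this statement (70/30 split, normalization, comparison of several classifiers, L2-regularized logistic regression) maps onto standard scikit-learn tools. The sketch below is illustrative only: it uses synthetic placeholder data rather than the gas-sensing features, shows three of the ten reported algorithms, and chooses hyperparameters and the random seed arbitrarily.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the selected gas-sensitive features.
X, y = make_classification(n_samples=500, n_features=12, random_state=42)

# 70% training / 30% test split, as in the cited protocol.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Three of the ten reported algorithms, each preceded by normalization.
models = {
    "Logistic Regression (L2)": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "Extra Trees": ExtraTreesClassifier(n_estimators=200, random_state=42),
}

for name, model in models.items():
    pipeline = make_pipeline(StandardScaler(), model)
    pipeline.fit(X_train, y_train)
    print(f"{name}: test accuracy = {pipeline.score(X_test, y_test):.3f}")
```

Wrapping the scaler and classifier in a single pipeline ensures the normalization statistics are fit on the training split only and then reused on the test split, which is the usual way to honour a 70/30 protocol without leakage.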