2020
DOI: 10.1080/1062936x.2020.1773534
Implementation of ensemble methods on QSAR Study of NS3 inhibitor activity as anti-dengue agent

Cited by 21 publications
(15 citation statements)
References 26 publications
“…This algorithm can be used for categorical responses, referred to as classification, and for continuous responses, referred to as regression. It samples the data and randomly constructs decision trees to avoid overfitting the training data [12]. Random Forest's additional features include measuring variable importance, missing-value imputation, and outlier detection [13].…”
Section: Prediction Model (mentioning)
Confidence: 99%
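The Random Forest features mentioned in this statement (handling both classification and regression, plus built-in variable importance) map directly onto standard library calls. A minimal sketch using scikit-learn on synthetic data follows; the dataset and hyperparameters are illustrative assumptions, not taken from the cited study.

```python
# Minimal Random Forest sketch: classification, regression, and variable importance.
# Synthetic data and settings are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classification: categorical response.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# Variable importance: mean impurity decrease averaged over the forest's trees.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top-5 descriptors by importance:", top)

# Regression: continuous response, same API.
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr.astype(float))
print("train R^2:", reg.score(X_tr, y_tr.astype(float)))
```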
“…This algorithm's main idea is to use a weighted version of the training data instead of random sampling [16]. It can adaptively adjust the weights across the classifier group and gives better results because of the diversity within the group [12]. The difference between Random Forest and XGBoost/AdaBoost is that the trees in Random Forest are grown independently, whereas the boosting trees are built sequentially [13].…”
Section: Prediction Model (mentioning)
Confidence: 99%
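To make that contrast concrete, here is a minimal sketch, again on assumed synthetic data, fitting both ensembles with scikit-learn: AdaBoost adds shallow trees one after another on re-weighted data, while Random Forest grows its trees independently on bootstrap samples. None of the settings are taken from the cited paper.

```python
# Contrast sketch: sequentially boosted trees vs. independently grown trees.
# Data and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=1)

# Boosting: each new shallow tree is fit to data re-weighted by earlier errors.
ada = AdaBoostClassifier(n_estimators=100, random_state=1).fit(X, y)

# Forest: full trees fit independently on bootstrap samples with random feature subsets.
rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

print("AdaBoost training accuracy:     ", ada.score(X, y))
print("Random Forest training accuracy:", rf.score(X, y))
```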
“…RF is one of the popular machine learning methods that can handle high-dimensional data (Nembrini et al., 2018). RF builds random decision trees using the bootstrap/bagging concept (SLRF, 2022; Ram et al., 2017; Kurniawan et al., 2020). RF performance is usually not affected by hyperparameter tuning (Logotheti et al., 2016).…”
Section: Prediction Model (mentioning)
Confidence: 99%
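The bootstrap/bagging concept referenced in this statement can be shown in a few lines without the library wrapper. The following from-scratch sketch (an illustration on assumed synthetic data, not the authors' code) resamples rows with replacement for each tree and aggregates predictions by majority vote.

```python
# From-scratch bagging sketch: bootstrap row samples, independent trees, majority vote.
# Synthetic data and the tree settings are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))        # bootstrap sample: rows drawn with replacement
    tree = DecisionTreeClassifier(max_features="sqrt", random_state=0)
    trees.append(tree.fit(X[idx], y[idx]))            # random feature subsets considered per split

# Aggregate the independently grown trees by majority vote.
votes = np.stack([t.predict(X) for t in trees])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble training accuracy:", (ensemble_pred == y).mean())
```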
“…AdaBoost works by adjusting the sample weights in each cycle of the weak-classifier group. AdaBoost can give better results because of the diversity among the weak classifiers, which is based on the performance of each classifier (Kurniawan et al., 2020).…”
Section: Prediction Model (mentioning)
Confidence: 99%
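The per-cycle weight adjustment described here is the core of the classic AdaBoost update. Below is a bare-bones sketch of that update rule on assumed synthetic data; it is a reconstruction of the standard algorithm for illustration, not the implementation used in the cited work.

```python
# Bare-bones AdaBoost sketch: up-weight misclassified samples each cycle and
# weight each weak classifier by its own performance. Data are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=2)
y_pm = np.where(y == 1, 1, -1)                     # labels in {-1, +1}
w = np.full(len(X), 1.0 / len(X))                  # uniform initial sample weights

stumps, alphas = [], []
for _ in range(20):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y_pm, sample_weight=w)
    pred = stump.predict(X)
    err = np.clip(w[pred != y_pm].sum(), 1e-10, 1 - 1e-10)   # weighted error of this cycle
    alpha = 0.5 * np.log((1 - err) / err)          # classifier weight from its performance
    w = w * np.exp(-alpha * y_pm * pred)           # increase weights of misclassified samples
    w = w / w.sum()                                # renormalize for the next cycle
    stumps.append(stump)
    alphas.append(alpha)

# Final prediction: performance-weighted vote of the weak classifiers.
agg = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("boosted training accuracy:", (np.sign(agg) == y_pm).mean())
```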