2023
DOI: 10.1016/j.simpa.2023.100504
PD-ADSV: An automated diagnosing system using voice signals and hard voting ensemble method for Parkinson’s disease

Cited by 7 publications (2 citation statements) · References 39 publications
“…Although both LightGBM and XGBoost [38] support parallel computation, LightGBM outperforms XGBoost in training speed and memory footprint, both of which lower the communication cost of parallel learning [66]. The significant features of LightGBM are the gradient-based one-side sampling (GOSS) decision tree algorithm, exclusive feature bundling (EFB), the depth-limited histogram algorithm, and the leaf-wise growth strategy [67]. GOSS strikes a compromise between the number of training samples used and the accuracy of the LightGBM decision tree.…”
Section: LightGBM (mentioning)
confidence: 99%
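The statement names four LightGBM ingredients without showing how they surface in practice. Below is a minimal sketch in Python (synthetic data via scikit-learn; every hyperparameter value is illustrative, not taken from the cited work) mapping each named feature to the corresponding LightGBM parameter. Note that recent LightGBM releases prefer data_sample_strategy="goss" over the older boosting_type="goss" spelling.

    # Illustrative only: synthetic data, arbitrary hyperparameter values.
    import lightgbm as lgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = lgb.LGBMClassifier(
        boosting_type="goss",  # gradient-based one-side sampling (GOSS);
                               # newer releases spell this data_sample_strategy="goss"
        num_leaves=31,         # leaf-wise growth: complexity bounded by leaf count...
        max_depth=6,           # ...with an optional depth limit to curb overfitting
        max_bin=255,           # histogram algorithm: features bucketed into max_bin bins
        enable_bundle=True,    # exclusive feature bundling (EFB), on by default
        n_estimators=200,
    )
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))

GOSS keeps all large-gradient examples and subsamples the small-gradient ones, which is the sample-count-versus-precision trade-off the statement describes.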
“…First, it offers high performance for both regression and classification problems by using second-order Taylor series expansions of the objective function, constructing boosted trees efficiently and in parallel. Second, XGBoost can automatically invoke a multi-threaded parallel computing model, making it several times faster than traditional ensemble learning models [61]. Finally, the regularisation term in its objective function improves generalisation, making the model less likely to overfit the data [62][63][64][65][66].…”
Section: XGBoost (mentioning)
confidence: 99%
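To make the Taylor-expansion point concrete: XGBoost's training interface only asks a loss function for its first and second derivatives (gradient and Hessian) at the current predictions, which is exactly the second-order Taylor approximation the statement refers to. The sketch below (synthetic data; hyperparameter values are illustrative) passes a hand-written binary logistic objective and sets the multi-threading and regularisation knobs mentioned above.

    # Illustrative only: synthetic data, arbitrary hyperparameter values.
    import numpy as np
    import xgboost as xgb
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    dtrain = xgb.DMatrix(X, label=y, nthread=4)  # multi-threaded data construction

    def logistic_obj(preds, dtrain):
        """Binary logistic loss: XGBoost needs only grad and hess per example,
        i.e. the first two Taylor terms of the loss around the current margin."""
        labels = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))
        grad = p - labels        # first derivative of the loss
        hess = p * (1.0 - p)     # second derivative of the loss
        return grad, hess

    params = {
        "max_depth": 4,
        "eta": 0.1,
        "nthread": 4,      # multi-threaded split finding
        "lambda": 1.0,     # L2 regularisation on leaf weights
        "alpha": 0.0,      # L1 regularisation
        "gamma": 0.0,      # minimum loss reduction required to split
    }
    booster = xgb.train(params, dtrain, num_boost_round=100, obj=logistic_obj)

Omitting obj and setting objective="binary:logistic" in params uses the built-in equivalent of this hand-written loss; the lambda, alpha, and gamma terms are the regularisation penalties the statement credits for reducing overfitting.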