2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS)
DOI: 10.1109/icscds53736.2022.9760962

Diagnosis of Parkinson's Disease using Principal Component Analysis and Machine Learning algorithms with Vocal Features

Cited by 14 publications (3 citation statements)
References 8 publications
“…Consistent with the literature (16, 32, 33, 42), our findings affirm the superior performance of SVM models, aligning with the trends in PD diagnosis using voice analysis. Only a few studies showed different results (43, 44). Our model's accuracy and F1 scores for SVM, GB, and RF are comparable or superior to those reported in more extensive studies (38, 45).…”
Section: Discussion (supporting)
confidence: 63%
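The comparison above rests on accuracy and F1 scores for three classifier families. As a minimal sketch of such a comparison, assuming a vocal-feature matrix X and binary PD labels y are already loaded (neither is given in the report, and the model settings below are illustrative defaults, not the cited paper's configuration):

# Illustrative comparison of the three classifiers discussed above.
# X (vocal features) and y (binary PD labels) are assumed to exist.
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "GB": GradientBoostingClassifier(),
    "RF": RandomForestClassifier(),
}

for name, model in models.items():
    # 5-fold cross-validation, reporting the two metrics the statement cites
    scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "f1"])
    print(f"{name}: acc={scores['test_accuracy'].mean():.3f} "
          f"f1={scores['test_f1'].mean():.3f}")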
“…To compact and eliminate irrelevant features, a dimensionality reduction technique called principal component analysis (PCA) was applied (Jolliffe and Cadima 2016). An algorithm for diagnosing PD based on PCA and machine learning algorithms using vocal features reported 95% accuracy (Rao et al 2022). Further, resampling with the synthetic minority oversampling technique (SMOTE) (Alberto Fernández et al 2018) was employed to balance the class distribution and broaden the sample range.…”
Section: Introduction (mentioning)
confidence: 99%
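The preprocessing chain this statement describes, PCA followed by SMOTE, can be sketched as follows. This is an illustrative sketch rather than the cited paper's code: X and y are assumed feature/label arrays, and the 95% variance threshold and random_state are assumptions.

# PCA for dimensionality reduction, then SMOTE to balance the classes.
from sklearn.decomposition import PCA
from imblearn.over_sampling import SMOTE

pca = PCA(n_components=0.95)      # keep components explaining 95% of the variance
X_reduced = pca.fit_transform(X)

smote = SMOTE(random_state=42)    # synthetic minority oversampling
X_balanced, y_balanced = smote.fit_resample(X_reduced, y)

print(X.shape, "->", X_reduced.shape, "->", X_balanced.shape)

In practice SMOTE is usually fit on the training split only, so that synthetic samples never leak into the evaluation data.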
“…As for the parameters tested, for the DNN, we explored the number of neurons in the first dense layer (16, 32, 64), learning rates (0.001, 0.01, 0.05), dropout rates (0.1, 0.3, 0.5), batch sizes (8, 16, 32), epochs (30, 70), activation functions (sigmoid, tanh, relu, and swish) and optimizers (Adaptive Moment Estimation (Adam), Stochastic Gradient Descent and Adamax, a variant of Adam based on the infinity norm). For the GB classifier, we adjusted several hyperparameters: learning rate (0.1, 0.2, 0.3), max depth (5), max features (sqrt, log2), min samples leaf (10, 20), min samples split (20, 30), n_estimators (200, 300), and subsample (0.8, 0.9).…”
Section: Machine Learning Analysis (mentioning)
confidence: 99%
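The GB grid quoted above maps directly onto the parameters of scikit-learn's GradientBoostingClassifier; a sketch of the corresponding grid search follows, assuming X_train and y_train exist and that scikit-learn is the implementation (the statement does not name one). The DNN grid could be searched analogously with a tuner such as KerasTuner.

# Grid search over the GB hyperparameter values quoted in the statement.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier

param_grid = {
    "learning_rate": [0.1, 0.2, 0.3],
    "max_depth": [5],
    "max_features": ["sqrt", "log2"],
    "min_samples_leaf": [10, 20],
    "min_samples_split": [20, 30],
    "n_estimators": [200, 300],
    "subsample": [0.8, 0.9],
}

search = GridSearchCV(GradientBoostingClassifier(), param_grid,
                      scoring="f1", cv=5, n_jobs=-1)
search.fit(X_train, y_train)   # X_train / y_train assumed available
print(search.best_params_)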