2012
DOI: 10.5120/9675-4102
Classification of Normal and Pathological Voice using GA and SVM

Abstract: The analysis of pathological voice is a challenging and important area of research in speech processing. Acoustic characteristics of the voice are mainly used to discriminate normal voices from pathological voices. This study explores the ability of acoustic parameters to discriminate normal voices from pathological voices, and an attempt is made to analyze and classify pathological voice versus normal voice in children. The classification of pathological voice from normal voice is implemented…

Cited by 15 publications (8 citation statements) | References 10 publications
“…Due to its good generalization power, SVM is considered a state-of-the-art classifier (Alghowinem et al 2013a), showing great performance in the identification of speech pathologies (Arjmandi and Pooyan 2012;Sellam and Jagadeesan 2014;Wang et al 2011); along with GMM, SVM is the most widely used classification technique using voice parameters (Jiang et al 2017). The greatest performances from different SVM kernels in this dataset support findings from previous studies in the literature.…”
Section: Introduction (supporting)
confidence: 77%
“…The process of feature selection is to select the best features that describe the speaker when dealing with hundreds of features that lead to increasing the workload of recognition. Selecting the best features set leads to reducing the classifier training time and as well as increasing the classification accuracy [23]. The accuracy of classification using principal component analysis in addition to discrete wavelet + curvelet is shown in Tables III and IV. From Tables III and IV, it is inferred that the accuracy was impacted positively and it is clear that reducing the features by using PCA did not affect the classification accuracy where the classification accuracy of level one and level two was increased to achieve the best classification and the accuracy of level three still 100%.…”
Section: Assessment of Results (mentioning)
confidence: 99%
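The passage above describes reducing the feature set with PCA before classification and observing that accuracy is preserved. A minimal sketch of that idea, using scikit-learn on synthetic data standing in for real acoustic features (this is not the cited authors' code; dataset, dimensions, and classifier settings are illustrative assumptions):

```python
# Sketch: PCA feature reduction before an SVM classifier, then a check that
# accuracy with reduced features stays close to accuracy with all features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a 50-dimensional acoustic feature set.
X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Full feature set vs. PCA-reduced feature set, same classifier.
full = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
reduced = make_pipeline(StandardScaler(), PCA(n_components=10),
                        SVC()).fit(X_tr, y_tr)

print("full:", full.score(X_te, y_te),
      "reduced:", reduced.score(X_te, y_te))
```

The point of the comparison mirrors the quoted observation: reducing dimensionality also shortens classifier training time, and a well-chosen projection need not hurt accuracy.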
“…The SVM with radial basis function as a kernel was utilized because it has less restriction on the data volume and number of features, more general than the linear kernels and it produces better accuracy compared to other kernel functions [24,25]. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier [26]. The best combination of two SVM parameters; cost (c) and gamma (γ) were obtained using LIBSVM selection tool which has been implemented by Chang and Lin [27].…”
Section: Classification (mentioning)
confidence: 99%
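The passage above describes an RBF-kernel SVM whose cost (c) and gamma (γ) parameters are tuned by grid search with LIBSVM's selection tool. A hedged sketch of the same procedure using scikit-learn, whose `SVC` wraps LIBSVM internally (the data and the grid values are illustrative assumptions, not the cited study's settings):

```python
# Sketch: cross-validated grid search over the RBF-SVM cost (C) and gamma
# parameters, the combination the quoted passage tunes via LIBSVM's tool.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf"),                      # radial basis function kernel
    param_grid={"C": [0.1, 1, 10, 100],     # cost: margin/error trade-off
                "gamma": [1e-3, 1e-2, 1e-1, 1]},  # RBF width
    cv=5,                                   # 5-fold cross-validation
)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV accuracy:", grid.best_score_)
```

A larger C narrows the margin to fit training points more tightly, while gamma controls how locally each support vector influences the decision boundary; the cross-validated search picks the combination with the lowest expected generalization error, matching the margin argument in the quoted text.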