Interspeech 2019
DOI: 10.21437/interspeech.2019-2967

Early Identification of Speech Changes Due to Amyotrophic Lateral Sclerosis Using Machine Classification

Abstract: We used a machine learning (ML) approach to detect bulbar amyotrophic lateral sclerosis (ALS) prior to the onset of overt speech symptoms. The dataset included speech samples from 123 participants who were stratified by sex and into three groups: healthy controls, ALS symptomatic, and ALS presymptomatic. We compared models trained on three group pairs (symptomatic-control, presymptomatic-control, and all ALS-control participants). Using acoustic features obtained with the OpenSMILE ComParE13 configuration, we …
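The classification setup the abstract describes (acoustic feature vectors filtered and fed to a classifier) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic arrays stand in for real OpenSMILE ComParE13 feature vectors, the group sizes and the simulated group shift are invented, and the SelectKBest-plus-SVM pipeline follows the feature-filtering approach the citing papers attribute to this work.

```python
# Illustrative sketch: acoustic feature vectors (stand-ins for OpenSMILE
# ComParE13 features) filtered with SelectKBest and classified with an SVM.
# All data below is synthetic; labels 0/1 denote control vs. ALS groups.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_group, n_features = 60, 200           # hypothetical sizes, not the paper's
X_control = rng.normal(0.0, 1.0, (n_per_group, n_features))
X_als = rng.normal(0.3, 1.0, (n_per_group, n_features))  # small simulated group shift
X = np.vstack([X_control, X_als])
y = np.array([0] * n_per_group + [1] * n_per_group)      # 0 = control, 1 = ALS

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=50)),  # keep the 50 most discriminative features
    ("svm", SVC(kernel="rbf")),
])
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 3))
```

Wrapping the feature filter inside the pipeline matters: SelectKBest is then refit on each training fold, so the held-out fold never leaks into feature selection.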


Cited by 12 publications (4 citation statements)
References 27 publications (32 reference statements)
“…An et al [23] used convolutional neural networks (CNNs) to compare the intelligible speech produced by patients with ALS to that of healthy individuals. Gutz et al [24] merged SVM and feature filtering techniques (SelectKBest). In addition, Vashkevich et al [25] used linear discriminant analysis (LDA) to verify the suitability of the sustained vowel phonation test for automatic detection of patients with ALS.…”
Section: Introduction
confidence: 99%
“…Once the features were obtained, we used various classification algorithms to perform predictions based on supervised classification. In addition to traditional SVMs [9, 16, 21, 22, 24], NNs [9, 16, 23], and LDA [25], we used logistic regression (LR), one of the most frequently used models for classification [29, 30]; random forest (RF) [31], an ensemble method that constructs multiple tree predictors, a classic predictive algorithm [22]; and naïve Bayes (NaB), which remains widely used [32] and is based on applying Bayes’ theorem.…”
Section: Introduction
confidence: 99%
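The comparison of supervised classifiers named in the statement above (logistic regression, random forest, naïve Bayes) can be sketched with a common cross-validation loop. This is a generic illustration on a synthetic dataset, not the cited study's experiment; all sizes and hyperparameters are assumptions.

```python
# Illustrative comparison of LR, RF, and naive Bayes on a synthetic
# binary problem, scored with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=30, n_informative=10,
                           random_state=0)
models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "NaB": GaussianNB(),
}
results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

Evaluating every model with the same folds keeps the comparison fair; differences in mean accuracy then reflect the models rather than the data split.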
“…Although most acoustic measures except the irregularity of the low-frequency subband signal did not show a substantial contribution to the classification between individuals with ALS and healthy controls, this by no means suggests that acoustic information is not useful in detecting speech impairment in ALS. In fact, previous studies have demonstrated success in using acoustic signals (e.g., mel-frequency cepstral coefficients; filterbank energies) to differentiate individuals with ALS from healthy speakers (An et al., 2018; Gutz et al., 2019). Yet a limitation of these studies is that the acoustic features used to train the classification models are not interpretable from a physiologic perspective.…”
Section: Utility of the Multimodal Framework in Detecting Speech Impairment in ALS
confidence: 99%
“…An et al [25] employed convolutional neural networks to classify the intelligible speech of ALS patients compared to that of healthy people. Finally, Gutz et al [26] combined SVM and feature filtering techniques.…”
Section: Introduction
confidence: 99%