Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies 2019
DOI: 10.18653/v1/w19-1704
Speech-based Estimation of Bulbar Regression in Amyotrophic Lateral Sclerosis

Abstract: Amyotrophic Lateral Sclerosis (ALS) is a progressive neurological disease that leads to degeneration of motor neurons and, as a result, inhibits the ability of the brain to control muscle movements. Monitoring the progression of ALS is of fundamental importance due to the wide variability in disease outlook that exists across patients. This progression is typically tracked using the ALS functional rating scale-revised (ALSFRS-R), which is the current clinical assessment of a patient's level of functional impai…

Cited by 8 publications (8 citation statements) · References 27 publications
“…In clinical practice, speaking rate (number of words produced per minute, wpm) and speech intelligibility (the percentage of words that are understood by a listener) are two standards currently used to assess overall speech performance of patients with dysarthria [11]. While these measures are essential for monitoring disability and treatment planning, symptoms of bulbar dysfunction are subtle early in disease progression and often manifest before perceptual characteristics are detectable [6, 12]. Although current measures are clinically useful, it has been suggested that instrumental or performance-based measures are necessary for detecting early changes in bulbar motor function [13].…”
Section: Introduction (confidence: 99%)
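The two clinical metrics quoted above are simple ratios, which can be sketched as follows. This is a minimal illustration of the definitions given in the statement (words per minute, percentage of words understood by a listener); the function names are illustrative, not taken from the cited papers.

```python
def speaking_rate_wpm(word_count: int, duration_seconds: float) -> float:
    """Speaking rate: number of words produced per minute of speech."""
    return word_count / (duration_seconds / 60.0)


def intelligibility_pct(words_understood: int, words_spoken: int) -> float:
    """Speech intelligibility: percentage of spoken words a listener
    transcribed or understood correctly."""
    return 100.0 * words_understood / words_spoken


# Example: 120 words produced in 75 s; a listener understood 114 of them.
rate = speaking_rate_wpm(120, 75.0)    # 96.0 wpm
intel = intelligibility_pct(114, 120)  # 95.0 %
```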
“…In most of the works, they use combinations of (x, y) parameters (Wang et al., 2015) or (y, z) parameters (Wang et al., 2016a) for speech analysis and these are performed for normal speakers. In a few works, the authors have used (x, y, z) parameters of tongue and lip sensors (Wang et al., 2016b; Wisler et al., 2019) to perform articulatory analyses. In Wang et al. (2016b), articulatory data obtained from (x, y, z) parameters of the tongue tip, tongue back, upper and lower lips are used along with acoustic features to improve the performance of the ASR system for amyotrophic lateral sclerosis (ALS) subjects.…”
Section: Introduction (confidence: 99%)
“…In Wang et al (2016b), articulatory data obtained from (x, y, z) parameters of the tongue tip, tongue back, upper and lower lips are used along with acoustic features to improve the performance of the ASR system for amyotrophic lateral sclerosis (ALS) subjects. Wisler et al (2019) used articulatory motions of tongue and lip sensors to estimate ALS functional rating scale – revised (ALSFRS-R) bulbar subscore to monitor the progression of ALS. The inclusion of articulatory data along with the acoustic data improved the performance of the support vector regression model.…”
Section: Introduction (confidence: 99%)
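The approach described above, fusing articulatory sensor features with acoustic features in a support vector regression model to estimate the ALSFRS-R bulbar subscore, can be sketched roughly as below. This is a hedged illustration only: the data is synthetic, and the feature dimensions, kernel, and hyperparameters are assumptions, not those used by Wisler et al. (2019).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 80  # synthetic "patients" / recording sessions

# Hypothetical feature blocks: per-sensor (x, y, z) movement summaries
# for tongue/lip sensors, plus acoustic feature functionals.
artic = rng.normal(size=(n, 12))
acoust = rng.normal(size=(n, 20))
X = np.hstack([artic, acoust])  # feature fusion by concatenation

# Synthetic target: ALSFRS-R bulbar subscore, clipped to its 0-12 range.
y = np.clip(10 + X[:, 0] - X[:, 12] + rng.normal(scale=0.5, size=n), 0, 12)

# Support vector regression with feature standardization.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X, y)
pred = model.predict(X[:5])  # estimated bulbar subscores
```

Concatenating the articulatory block alongside the acoustic block is what the cited statement reports as improving the regression model's performance.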
“…Machine learning (ML) approaches that are based on a large number of acoustic speech features might be particularly well suited to detect speech changes despite the large heterogeneity in speech symptoms across patients. The studies in [18]–[23] have demonstrated that ML can detect ALS and monitor its progression based on individual speech samples. Like [18]–[23], our approach used large-space acoustic features extracted using OpenSMILE [24].…”
Section: Introduction (confidence: 99%)