Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility 2017
DOI: 10.1145/3132525.3134819

Feasibility of Using Automatic Speech Recognition with Voices of Deaf and Hard-of-Hearing Individuals

Abstract: Many personal devices have transitioned from visually controlled interfaces to speech-controlled interfaces to reduce cost and interaction friction, supported by the rapid growth in the capabilities of speech-controlled interfaces, e.g., Amazon Echo or Apple's Siri. A consequence is that people who are deaf or hard of hearing (DHH) may be unable to use these speech-controlled devices. We show that deaf speech has a high error rate, compared to hearing speech, in commercial speech-controlled interfaces. Deaf speech had…
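
The abstract's comparison of deaf and hearing speech is expressed as a recognition error rate. A minimal sketch of how such a comparison could be scored, assuming reference transcripts and ASR outputs have already been collected as strings (the utterances, group labels, and use of the open-source jiwer library are illustrative assumptions, not the authors' code):

# Hypothetical sketch: compare ASR word error rate (WER) across two speaker groups.
# The transcripts below are made-up placeholders, not data from the paper.
from jiwer import wer

samples = {
    "dhh_speakers": [
        ("turn on the living room lights", "turn on the leaving room light"),
        ("what is the weather today", "what is the water today"),
    ],
    "hearing_speakers": [
        ("turn on the living room lights", "turn on the living room lights"),
        ("what is the weather today", "what is the weather today"),
    ],
}

for group, pairs in samples.items():
    references = [ref for ref, _ in pairs]
    hypotheses = [hyp for _, hyp in pairs]
    # jiwer.wer accepts lists of reference and hypothesis strings and
    # returns the aggregate word error rate over all pairs.
    print(f"{group}: WER = {wer(references, hypotheses):.2f}")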

Cited by 22 publications (8 citation statements)
References 3 publications
“…In addition, our study suggests that the roots of usability and accessibility challenges for other populations--like older adults, children, people with Alzheimer's, and people with intellectual disabilities--may be traced back to VAPA design guidelines. Prior work finds that speech recognition often breaks down for children and older adults [3,21] as well as people who are deaf or hard-of-hearing [16], because there is wide variation in pitch, pronunciation, patterns of stress, and intersyllabic pauses, leading to higher recognition error rates [16,21,35]. Prior work also finds that the timeout period for speech input is often inadequate for people with Alzheimer's [34] and intellectual disabilities [3].…”
Section: Putting Disability Back Into the Ideal Human
confidence: 99%
“…In [6], 45 audio files were chosen by a naive listener. 15 samples were rated "good", 16 samples were "fine" and 14 samples were "bad".…”
Section: Methodology, Audio Dataset
confidence: 99%
“…Glasser, Kushalnagar, and Kushalnagar did a preliminary study on using Deaf and Hard-of-Hearing (DHH) speech [6]. However, there have been significant advances in ASR since then.…”
Section: Introduction and Previous Work
confidence: 99%
“…ASR models trained on a large training dataset have achieved excellent results and are used in personal phones, IoT (Internet of Things) devices, and cloud services, examples of which include Alexa, Siri, and Bixby. However, non-standard speech, such as amyotrophic lateral sclerosis (ALS) speech, Parkinson's speech, and cochlear implant (CI) patients' speech, has a low recognition rate, whereas ASR models are trained using standard speech data sets [4,5]. Therefore, people with non-standard speech cannot use ASR models trained with a standard speech dataset.…”
Section: Introduction
confidence: 99%
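
The statement above describes evaluating off-the-shelf ASR models, trained on standard speech corpora, against non-standard speech. A minimal sketch of that kind of check, assuming a recording and its human-verified transcript are available locally (the file name, reference text, and choice of the pretrained facebook/wav2vec2-base-960h model are illustrative assumptions, not from the cited works):

# Hypothetical sketch: transcribe one recording with a pretrained ASR model
# and score it against a reference transcript.
from transformers import pipeline
from jiwer import wer

# Model trained on standard (LibriSpeech) read speech.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

reference = "turn on the kitchen lights"        # placeholder human-verified transcript
hypothesis = asr("sample.wav")["text"].lower()  # placeholder audio file

print(f"WER for this recording: {wer(reference, hypothesis):.2f}")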