Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility 2013
DOI: 10.1145/2513383.2517032
Audio-visual speech understanding in simulated telephony applications by individuals with hearing loss

Abstract: We present a study into the effects of the addition of a video channel, video frame rate, and audio-video synchrony, on the ability of people with hearing loss to understand spoken language during video telephone conversations. Analysis indicates that higher frame rates result in a significant improvement in speech understanding, even when audio and video are not perfectly synchronized. At lower frame rates, audio-video synchrony is critical: if the audio is perceived 100 ms ahead of video, understanding drops…

Cited by 6 publications (2 citation statements)
References 14 publications
“…Even though being able to see a talker’s face during video conferencing can be helpful to improve speech understanding (Gosselin & Gagné 2011; Devesse et al 2018), the degraded facial cues or auditory cues and even mismatch of audio-video information could still be misleading and difficult for this population. For example, Kozma-Spytek et al (2013) reported that, for people with hearing loss to understand spoken language in videotelephony, even small audio-video asynchrony can lead to great negative impact. Therefore, it warrants more investigation to study the text benefit in listeners with hearing loss when there are three sources of information in two modalities (visual text supplementation, visual facial cues, and auditory signal).…”
Section: Discussion
confidence: 99%
“…Often, these activities take place in settings referred to as hackerspaces, which enable communities and groups to have a physical space to bring people together in implementing ideas [41,72]. Individuals often take part in these activities for purposes other than financial gain [40] and can share these designs online and opensource [11]. DIY and hacking should not simply be perceived as a hobbyist or leisure practice, but as a professionalizing field functioning in parallel to research and industry labs [41].…”
Section: DIY and Hacking Assistive Technologies
confidence: 99%