2014
DOI: 10.1121/1.4901712

Speech-cue transmission by an algorithm to increase consonant recognition in noise for hearing-impaired listeners

Abstract: Consonant recognition was assessed following extraction of speech from noise using a more efficient version of the speech-segregation algorithm described in Healy, Yoho, Wang, and Wang [(2013) J. Acoust. Soc. Am. 134, 3029-3038]. Substantial increases in recognition were observed following algorithm processing, which were significantly larger for hearing-impaired (HI) than for normal-hearing (NH) listeners in both speech-shaped noise and babble backgrounds. As observed previously for sentence recognition, olde…

Cited by 28 publications (28 citation statements)
References 35 publications (62 reference statements)
“…It is important to find the most relevant way of assessing the benefit from such separation algorithms, and even though many studies use objective metrics, such as the STOI and ESTOI (Jensen and Taal, 2016; Taal et al., 2011), the final assessment should be a speech recognition test on the target group of listeners. Earlier work on separation has used consonant recognition in hearing-impaired listeners as an outcome measure and found a benefit in both speech-shaped and babble noises (Healy et al., 2014). This type of benefit was also confirmed in novel noise types, which is a basic requirement for successful application of such an algorithm (Healy et al., 2015).…”
Section: Introduction (mentioning; confidence: 83%)
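The statement above contrasts objective intelligibility metrics with listener testing. As a minimal sketch of the objective side of that comparison, the snippet below scores a processed signal against its clean reference with STOI (Taal et al., 2011) and ESTOI (Jensen and Taal, 2016); it assumes the open-source pystoi package and uses synthetic stand-in signals rather than real speech.

```python
import numpy as np
from pystoi import stoi  # assumed: the open-source pystoi implementation of STOI/ESTOI

fs = 10000                                          # Hz; STOI is defined at a 10-kHz rate
t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 220 * t)                 # stand-in for the clean target speech
processed = clean + 0.1 * np.random.randn(t.size)   # stand-in for the algorithm output

stoi_score = stoi(clean, processed, fs, extended=False)   # classic STOI, roughly 0..1
estoi_score = stoi(clean, processed, fs, extended=True)   # extended STOI (ESTOI)
print(f"STOI: {stoi_score:.3f}  ESTOI: {estoi_score:.3f}")
```

Scores like these are convenient during development, but, as the citing authors note, they do not replace recognition testing with the intended listener group.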
“…As described below, the algorithm tested currently differs from those employed in Healy et al. (2013) and Healy et al. (2014) in several aspects. Whereas the goal of the algorithms employed by Healy et al. (2013) and Healy et al. (2014) was to estimate the IBM, the current algorithm estimates the Ideal Ratio Mask (IRM; Srinivasan et al., 2006; Narayanan and Wang, 2013; Wang et al., 2014). To address the challenge of unseen noise segments, the current algorithm was trained using substantially longer noises, which were further expanded using a noise-perturbation technique (Chen et al., 2015).…”
Section: Methods (mentioning; confidence: 99%)
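The statement above distinguishes the Ideal Binary Mask (IBM) from the Ideal Ratio Mask (IRM) as training targets. The sketch below (my own illustration, not the authors' code) shows the standard definitions computed from separately available speech and noise: the IBM thresholds the local SNR of each time-frequency unit against a local criterion, while the IRM is a soft speech-to-mixture energy ratio. The local criterion and STFT settings here are placeholders, not the values used in the cited studies.

```python
import numpy as np
from scipy.signal import stft

def ideal_masks(speech, noise, fs, lc_db=-5.0, nperseg=512):
    """Compute (IBM, IRM) from premixed speech and noise; lc_db is the local criterion in dB."""
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)   # T-F representation of the speech
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)    # T-F representation of the noise
    s_pow = np.abs(S) ** 2
    n_pow = np.abs(N) ** 2 + 1e-12                   # guard against division by zero
    local_snr_db = 10.0 * np.log10(s_pow / n_pow)
    ibm = (local_snr_db > lc_db).astype(float)       # hard 0/1 mask: speech-dominated units
    irm = np.sqrt(s_pow / (s_pow + n_pow))           # soft mask in [0, 1]
    return ibm, irm
```

During training, a model learns to predict one of these masks from features of the noisy mixture alone.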
“…A subsequent study (Healy et al., 2014) involved recognition of isolated consonants in order to identify the specific speech cues transmitted by the algorithm and the IBM. Consonant recognition in speech-shaped noise and babble was substantially increased by the algorithm for both NH and HI listeners, despite the lack of top-down cues associated with sentence recognition and the correspondingly increased reliance on bottom-up acoustic cues.…”
Section: Introduction (mentioning; confidence: 99%)
“…Using a similar approach, algorithms have been trained on labelled datasets to approximate the IBM. These have been reported to provide remarkably large SI improvements for NH listeners (Kim et al., 2009), HI listeners (Healy et al., 2013; Healy et al., 2014), and CI users (Hu and Loizou, 2010) for speech in both stationary and non-stationary noise, even at low SNRs. However, these algorithms were trained and tested on datasets using the same speaker, background noise and SNRs.…”
Section: Introduction (mentioning; confidence: 99%)
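To make the mask-based separation described in this statement concrete, the short sketch below (again my own illustration under the same placeholder STFT settings as the mask definitions above, not the cited systems) applies an ideal or estimated mask to the noisy mixture's spectrogram and resynthesizes an enhanced waveform for presentation to listeners. The mask is assumed to have been computed on the same STFT grid.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_mask(mixture, mask, fs, nperseg=512):
    """Attenuate noise-dominated T-F units of the mixture and invert back to a waveform."""
    _, _, M = stft(mixture, fs=fs, nperseg=nperseg)
    masked = mask * M                                # retain speech-dominated units, suppress the rest
    _, enhanced = istft(masked, fs=fs, nperseg=nperseg)
    return enhanced
```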