2020 · DOI: 10.1121/10.0000563
Sentence perception in noise by hearing-aid users predicted by syllable-constituent perception and the use of context

Abstract: Masked sentence perception by hearing-aid users is strongly correlated with three variables: (1) the ability to hear phonetic details as estimated by the identification of syllable constituents in quiet or in noise; (2) the ability to use situational context that is extrinsic to the speech signal; and (3) the ability to use inherent context provided by the speech signal itself. This approach is called "the syllable-constituent, contextual theory of speech perception" and is supported by the performance of 57 h…

Cited by 1 publication (2 citation statements)
References 10 publications
“…Second, processing the continuous flow of speech cues in a conversation may be affected more by an individual's working memory and speech processing speed, compared to phoneme recognition in isolated syllables (Baddeley, 2012). Finally, the lexical, semantic, and syntactic information in meaningful words and sentences provides speech information, even if portions of the speech signal are inaudible or masked by noise (Boothroyd & Nittrouer, 1988; Bronkhorst et al., 2002; Miller et al., 2020). Differences like these confound the translation of phoneme recognition from isolated nonsense syllables to words and from words to connected speech.…”
Citation type: mentioning (confidence: 99%)
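
The context effect this statement describes is commonly quantified with the k-factor relation from Boothroyd and Nittrouer (1988). A minimal statement of that relation, using generic symbols rather than the cited papers' exact notation:

    p_c = 1 - (1 - p_n)^k

Here p_n is the probability of recognizing a speech unit without context, p_c is the probability with context, and k >= 1 grows with the listener's use of contextual support; k = 1 means context contributes nothing.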
“…The results of this study are expected to provide insights into how consonant cues are perceived by eye (e.g., speechreading) and by ear when the consonants are presented in a sequence of syllables spoken at a conversational rate. Such information may be helpful not only in designing speech enhancement strategies for automatic speech recognition and hearing prostheses, but also in improving models that translate phoneme recognition scores to word recognition scores or sentence recognition scores (Boothroyd & Nittrouer, 1988; Bronkhorst et al., 2002; Miller et al., 2020). Current models use phoneme recognition scores measured using isolated syllables or words as the baseline, and therefore may be underestimating the benefit of context when considering the role of phoneme recognition in understanding connected speech.…”
Citation type: mentioning (confidence: 99%)
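
The models that translate phoneme recognition scores into word scores, referenced in both statements, are typically variants of Boothroyd and Nittrouer's (1988) j-factor model. Below is a minimal sketch of that model in Python; the function name and all numeric values are illustrative, not taken from any of the cited studies.

    # Minimal sketch of the Boothroyd & Nittrouer (1988) j-factor model,
    # which translates phoneme recognition probability into word
    # recognition probability. All numbers are illustrative.

    def word_score(phoneme_score: float, j: float) -> float:
        """Probability of recognizing a whole word, given the probability
        of recognizing each of its phonemes.

        j is the effective number of statistically independent parts:
        j is near n (the number of phonemes) for nonsense syllables, and
        j < n for real words, because lexical context lets listeners
        recover a word without hearing every phoneme.
        """
        return phoneme_score ** j

    p_phoneme = 0.80                        # phoneme score measured in isolation
    print(word_score(p_phoneme, j=3.0))     # nonsense CVC syllable: ~0.51
    print(word_score(p_phoneme, j=2.0))     # real CVC word: ~0.64 (context helps)

Because j is fitted to data, a baseline phoneme score measured with isolated syllables folds context effects into the exponent, which is the underestimation of the context benefit that the citing authors describe.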