Several studies have demonstrated that when talkers are instructed to speak clearly, the resulting speech is significantly more intelligible than speech produced in ordinary conversation. These speech intelligibility improvements are accompanied by a wide variety of acoustic changes. The current study explored the relationship between acoustic properties of vowels and their identification in clear and conversational speech, for young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. Monosyllabic words excised from sentences spoken either clearly or conversationally by a male talker were presented in 12-talker babble for vowel identification. While vowel intelligibility was significantly higher in clear speech than in conversational speech for the YNH listeners, no clear speech advantage was found for the EHI group. Regression analyses were used to assess the relative importance of spectral target, dynamic formant movement, and duration information for perception of individual vowels. For both listener groups, all three types of information emerged as primary cues to vowel identity. However, the relative importance of the three cues for individual vowels differed greatly for the YNH and EHI listeners. This suggests that hearing loss alters the way acoustic cues are used for identifying vowels.
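The cue-importance analysis described above can be sketched as a standardized multiple regression: z-scoring the predictors puts spectral, dynamic, and durational cues on a common scale so their coefficients are comparable. The variable names and simulated data below are illustrative only, not the study's materials or method.

```python
import numpy as np

# Hypothetical sketch: relative importance of three acoustic cues to vowel
# identification, estimated via standardized least-squares coefficients.
rng = np.random.default_rng(1)
n = 120
spectral_target = rng.normal(size=n)
formant_movement = rng.normal(size=n)
duration = rng.normal(size=n)
# Simulated identification scores, driven mostly by the spectral target
accuracy = (0.6 * spectral_target + 0.1 * formant_movement
            + 0.3 * duration + rng.normal(0, 0.2, n))

X = np.column_stack([spectral_target, formant_movement, duration])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each predictor
yz = (accuracy - accuracy.mean()) / accuracy.std()
beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
print(dict(zip(["spectral", "formant", "duration"], beta.round(2))))
```

With data generated this way, the standardized coefficients recover the simulated cue ordering (spectral > duration > formant movement); in the study, that ordering differed by vowel and by listener group.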
Although there has been keen interest in the association between measures of sensory function and cognitive function for many years, in general, measures of sensory function have been confined to one or two senses and to measures of threshold sensitivity (acuity). In this study, rigorous psychophysical measures of threshold sensitivity, temporal gap detection, temporal order identification, and temporal masking were obtained in hearing, vision, and touch. In addition, all subjects completed 15 subtests of the Wechsler Adult Intelligence Scale, 3rd edition (WAIS–III). Data were obtained from 245 adults (18–87 years old) for the WAIS–III and for 40 measures of threshold sensitivity and temporal processing. The focus in this report is on individual differences in performance across the entire data set. Principal-components (PC) factor analysis reduced the 40 psychophysical measures to eight correlated factors, which were reduced further to a single global sensory processing factor. Similarly, PC factor analysis of the 15 WAIS–III scores yielded three correlated factors that were further reduced to a single global cognitive function factor. Age, global sensory processing, and global cognitive function were all moderately and significantly correlated with one another. However, paired partial correlations, each controlling for the third of these three measures, revealed that the moderate correlation between age and global cognitive function dropped to zero when global sensory processing was controlled for; the other two partial correlations remained intact. Structural models confirmed this result. These analyses suggest that the long-standing observation of age-related changes in cognitive function may be mediated by age-related changes in global sensory processing.
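The paired partial-correlation logic above can be illustrated by regressing the controlled-for variable out of the other two and correlating the residuals. The data below are synthetic, constructed so that cognition depends on age only through a sensory factor; the names and numbers are assumptions for illustration, not the study's data.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of each."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: cognitive scores track a sensory factor that itself
# declines with age, so age relates to cognition only via sensory processing.
rng = np.random.default_rng(0)
age = rng.uniform(18, 87, 245)
sensory = 0.8 * age + rng.normal(0, 10, 245)
cognition = 0.9 * sensory + rng.normal(0, 10, 245)

r_age_cog = np.corrcoef(age, cognition)[0, 1]        # sizeable raw correlation
r_partial = partial_corr(age, cognition, sensory)    # near zero once controlled
print(round(r_age_cog, 2), round(r_partial, 2))
```

In data generated this way, the raw age-cognition correlation is moderate while the partial correlation controlling for the sensory factor is near zero, mirroring the mediation pattern the abstract reports.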
These results suggest that acoustic vowel space expansion and large increases in vowel duration improve vowel intelligibility. In contrast, changing the dynamic characteristics of vowels does not appear to contribute to the improved intelligibility of clear speech vowels. However, variability across talkers suggested that improved vowel intelligibility can be achieved through a variety of clear speech strategies, including some apparently not measured here.
The purpose of this study was to examine the contribution of information provided by vowels versus consonants to sentence intelligibility in young normal-hearing (YNH) and typical elderly hearing-impaired (EHI) listeners. Sentences were presented in three conditions: unaltered, or with either the vowels or the consonants replaced with speech-shaped noise. Sentences from male and female talkers in the TIMIT database were selected. Baseline performance was established at 70 dB SPL using YNH listeners. Subsequently, EHI and YNH participants listened at 95 dB SPL. Participants listened to each sentence twice and were asked to repeat the entire sentence after each presentation. Words were scored correct only if identified exactly. Average performance for unaltered sentences was greater than 94%. Overall, EHI listeners performed more poorly than YNH listeners. However, vowel-only sentences were always significantly more intelligible than consonant-only sentences, usually by a ratio of 2:1, across groups. In contrast to written English or words spoken in isolation, these results demonstrate that for spoken sentences, vowels carry more information about sentence intelligibility than consonants for both young normal-hearing and elderly hearing-impaired listeners.