Reading and speechreading are both visual skills based on speech and language processing. Here we explore individual differences in speechreading in profoundly prelingually deaf adults, hearing adults with a history of dyslexia, and hearing adults with no history of a literacy disorder. Speechreading skill distinguished the three groups: the deaf group were better speechreaders than hearing controls, who were better than the group with a history of dyslexia. The dyslexic group, while within range of hearing controls in terms of reading, nevertheless showed a residual deficit in speech/language processing when tested with silent speech. Within-group correlations suggested different associations between speechreading subtasks, reading and language skills. In particular, in the deaf and dyslexic groups, but not in the hearing controls, speechreading skill correlated with reading ability.
Purpose: We describe the development of a new Test of Child Speechreading (ToCS) specifically designed for use with deaf and hearing children. Speechreading is a skill which deaf children require in order to access the language of the hearing community. ToCS is a deaf-friendly, computer-based test that measures child speechreading (silent lipreading) at three psycholinguistic levels: words, sentences and short stories. The aims of the study were to standardize ToCS with deaf and hearing children and to investigate the effects of hearing status, age and linguistic complexity on speechreading ability.

Method: 86 severely and profoundly deaf and 91 hearing children aged between 5 and 14 years participated. The deaf children were from a range of language and communication backgrounds, and their preferred mode of communication varied.

Results: Speechreading skills improved significantly with age for both deaf and hearing children. There was no effect of hearing status on speechreading ability: deaf and hearing children showed similar performance across all subtests of ToCS.

Conclusions: The Test of Child Speechreading (ToCS) is a valid and reliable assessment of speechreading ability in school-aged children that can be used to measure individual differences in speechreading performance.

Typical face-to-face communication is multi-modal, and speech perception involves the integration of both auditory and visual information (Rosenblum, 2005). The integration of visual and auditory speech seems to occur very early on: young babies are not only sensitive to the visual component of speech (e.g. Dodd & Burnham, 1988; Kuhl & Meltzoff, 1982; Patterson & Werker, 1999) but can detect visual-auditory synchronisation (Dodd, 1979) and even match visual-auditory vowels (Patterson & Werker, 2003).
Importantly, McGurk effects have been observed in infants as young as 4.5 months using classic habituation and dishabituation paradigms (Burnham & Dodd, 2004; Rosenblum, Schmuckler, & Johnson, 1997). This suggests that visual speech contributes to speech processing even in pre-lingual children, thereby strengthening the argument that speechreading (visual-alone speech perception) is a natural part of speech processing (e.g. Massaro, 1987). Further support comes from recent neuroimaging evidence suggesting that silent speechreading activates similar neural circuitry to audio-visual speech (e.g. Calvert et al., 1997; Pekkola et al., 2005).

For many deaf and hearing-impaired individuals, speechreading is the main route of access to the spoken language of the hearing community, and yet historically hearing people have often been reported to have at least equivalent, if not better, speechreading skills than deaf individuals (e.g. Arnold & Kopsel, 1996; Conrad, 1977; Green, Green, & Holmes, 1981; Massaro, 1987; Mogford, 1987). Most of these speechreading assessments were either designed to be used with hearing individuals and therefore contained complex...
Individual speechreading abilities have been linked with a range of cognitive and language-processing factors. The role of specifically visual abilities in relation to the processing of visible speech is less studied. Here we report that the detection of coherent visible motion in random-dot kinematogram displays is related to speechreading skill in deaf, but not in hearing, speechreaders. A control task requiring the detection of visual form showed no such relationship. Additionally, people born deaf were better speechreaders than hearing people on a new test of silent speechreading.