Among the most prevalent assumptions in science and society about the human reading process is that sound and sound-based phonology are critical to young readers: the child's sound-to-letter decoding is viewed as universal and vital to deriving meaning from print. We offer a different view. The crucial link for early reading success is not between segmental sounds and print. Instead, the human brain's capacity to segment, categorize, and discern linguistic patterning makes possible the segmentation of all languages, including languages on the hands in signed languages. Exposure to a natural sign language in early life equally affords the child's discovery of silent segmental units in visual sign phonology (VSP), which can likewise facilitate segmental decoding of print. We consider powerful biological evidence about the brain, how it builds sound and sign phonology, and why sound and sign phonology are equally important in language learning and reading. We offer a testable theoretical account, a reading model, and predictions about how VSP can facilitate segmentation and mapping between print and meaning. We explain how VSP can be a powerful facilitator of reading success for all children, deaf and hearing, an account with profound transformative impact on learning to read in deaf children with different language backgrounds. The existence of VSP has important implications for understanding core properties of all human language and reading, challenges assumptions about language and reading as being tied to sound, and provides novel insight into a remarkable biological equivalence between signed and spoken languages. WIREs Cogn Sci 2016, 7:366-381. doi: 10.1002/wcs.1404
Studies have shown that American Sign Language (ASL) fluency has a positive impact on deaf individuals' English reading, but the cognitive and cross-linguistic mechanisms that permit the mapping of a visual-manual language onto a sound-based language have yet to be elucidated. Fingerspelling, which represents English orthography with 26 distinct hand configurations, is an integral part of ASL and has been suggested to provide deaf bilinguals with important cross-linguistic links between sign language and orthography. Using hierarchical multiple regression, this study examined the contributions of age of ASL exposure, ASL fluency, and fingerspelling skill to reading fluency in deaf college-age bilinguals. After controlling for ASL fluency, fingerspelling skill significantly predicted reading fluency, revealing for the first time that fingerspelling, above and beyond ASL skills, contributes to reading fluency in deaf bilinguals. We suggest that fingerspelling, in the visual-manual modality, and reading, in the visual-orthographic modality, are mutually facilitating because they share the underlying cognitive capacities of accurate word decoding and automatic word recognition. The findings support the hypothesis that English reading proficiency may be facilitated by strengthening the relationship among fingerspelling, sign language, and orthographic decoding en route to reading mastery, and they may also point to optimal approaches for reading instruction for deaf and hard-of-hearing children.
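To make the analytical structure concrete, the sketch below shows a two-step hierarchical regression of the kind the abstract describes: control variables are entered first, then fingerspelling skill is added to test whether it explains reading fluency above and beyond them. The column names (age_of_exposure, asl_fluency, fingerspelling, reading_fluency) and the input file are hypothetical; the variables, covariates, and entry order actually used are those reported in the study itself.

```python
# Minimal sketch of a hierarchical (nested) multiple regression; assumes
# hypothetical column names and a hypothetical per-participant CSV.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.read_csv("participants.csv")  # hypothetical file of participant scores

# Step 1: control variables only (age of ASL exposure, ASL fluency).
step1 = smf.ols("reading_fluency ~ age_of_exposure + asl_fluency", data=data).fit()

# Step 2: add fingerspelling skill on top of the controls.
step2 = smf.ols(
    "reading_fluency ~ age_of_exposure + asl_fluency + fingerspelling", data=data
).fit()

# Does fingerspelling predict reading fluency above and beyond ASL skills?
print(f"R^2 step 1: {step1.rsquared:.3f}, R^2 step 2: {step2.rsquared:.3f}")
print(anova_lm(step1, step2))  # F-test on the change in fit between nested models
```

The nested-model F-test is what licenses the "above and beyond" claim: only the increment in explained variance attributable to fingerspelling, after the controls, is evaluated.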
This study investigated the acquisition of depicting signs (DS) among students learning a signed language as a second-modality, second language (M2L2). Depicting signs, broadly described, illustrate actions and states. The sample comprised 75 M2L2 students recruited from college-level American Sign Language (ASL) courses, who watched three short clips from Canary Row and retold them in ASL as best they could. Four types of DS were coded in the students' video-recorded retellings: (1) entity depicting signs (EDS); (2) body-part depicting signs (BPDS); (3) handling depicting signs (HDS); and (4) size-and-shape specifiers (SASS). Results revealed that SASS and HDS production increased as students advanced in their ASL learning and comprehension, whereas EDS production showed no relationship with ASL comprehension. ASL 2 students produced fewer DS than ASL 1 students but did not differ from ASL 3+ students. There were no differences in BPDS production among the three groups of learners, although the ability to produce BPDS was correlated with ASL comprehension. This study is the first to systematically elicit depicting signs from M2L2 learners in a narrative context, and the results have important implications for sign language pedagogy and instruction. Future research, particularly cross-sectional and longitudinal studies, is needed to trace the trajectory of DS acquisition and identify evidence-based pedagogical approaches for teaching depicting signs to M2L2 students. A minimal analysis sketch follows.
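The sketch below illustrates one way the group comparisons and correlations summarized above might be run on per-student DS counts. The file and column names (asl_level, sass_count, hds_count, eds_count, bpds_count, comprehension) are hypothetical, and the nonparametric tests are an assumption chosen for count data; the study's actual coding scheme and statistics are those reported in the paper.

```python
# Minimal sketch of group comparisons and correlations over depicting-sign
# counts; assumes hypothetical column names and a hypothetical CSV.
import pandas as pd
from scipy import stats

data = pd.read_csv("retellings.csv")  # hypothetical per-student DS counts

# Compare SASS counts across ASL 1, ASL 2, and ASL 3+ groups
# (Kruskal-Wallis, since count data are often non-normal).
groups = [g["sass_count"].values for _, g in data.groupby("asl_level")]
h, p = stats.kruskal(*groups)
print(f"SASS by ASL level: H = {h:.2f}, p = {p:.3f}")

# Correlate each DS type with ASL comprehension (Spearman rank correlation).
for ds in ["sass_count", "hds_count", "eds_count", "bpds_count"]:
    rho, p = stats.spearmanr(data[ds], data["comprehension"])
    print(f"{ds} vs comprehension: rho = {rho:.2f}, p = {p:.3f}")
```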
The cover image, created by Laura-Ann Petitto, is based on the article "Visual Sign Phonology: Insights into Human Reading and Language from a Natural Soundless Phonology," DOI: 10.1002/wcs.1404. Design credit: Laura-Ann Petitto, graphic artist Yiqiao Wang, and Tara Congdon.