For more than a century we have understood that the brain's left hemisphere is the primary site for processing language, yet why this is so has remained elusive. Using positron emission tomography, we report cerebral blood flow activity in profoundly deaf signers processing specific aspects of sign language in key brain sites widely assumed to be unimodal speech or sound processing areas: the left inferior frontal cortex when signers produced meaningful signs, and the planum temporale bilaterally when they viewed signs or meaningless parts of signs (sign-phonetic and syllabic units). Contrary to prevailing wisdom, the planum temporale may not be exclusively dedicated to processing speech sounds; it may instead be specialized for processing more abstract properties essential to language that can engage multiple modalities. We hypothesize that the neural tissue involved in language processing may not be prespecified exclusively by sensory modality (such as sound) but may entail polymodal neural tissue that has evolved unique sensitivity to aspects of the patterning of natural language. Such neural specialization for aspects of language patterning appears to be unmodifiable, insofar as languages with radically different sensory modalities, such as speech and sign, are processed at similar brain sites, while, at the same time, the neural pathways for expressing and perceiving natural language appear to be highly modifiable.

The left hemisphere of the human brain has been understood to be the primary site of language processing for more than 100 years, with the key prevailing question being why this is so: what is the driving force behind such organization? Recent functional imaging studies of the brain have provided powerful support for the view that specific language functions and specific brain sites are uniquely linked, including studies demonstrating increased regional cerebral blood flow (rCBF) in portions of the left superior and middle temporal gyri as well as the left premotor cortex when processing speech sounds (1-7), and in the left inferior frontal cortex (LIFC) when searching, retrieving, and generating information about spoken words (8-10). The view that specific left-hemisphere sites process language because they are dedicated to the motor articulation of speaking or the sensory processing of hearing speech and sound is particularly evident regarding the left planum temporale (PT), which participates in the processing of the meaningless phonetic-syllabic units that form the basis of all words and sentences in human language. This PT region of the superior temporal gyrus (STG) forms part of the classically defined Wernicke's receptive language area (11, 12), receives projections from the auditory afferent system (13, 14), and is considered to constitute unimodal secondary auditory cortex in both structure and function based on cytoarchitectonic, chemoarchitectonic, and connectivity criteria. The prevailing fundamental problem, however, is whether the brain sites involved in language processing ...
Divergent hypotheses exist concerning the types of knowledge underlying early bilingualism, with some portraying a troubled course marred by language delays and confusion, and others portraying one that is largely unremarkable. We studied the extraordinary case of bilingual acquisition across two modalities to examine these hypotheses. Three children acquiring Langues des Signes Québécoise and French, and three children acquiring French and English (ages at onset approximately 1;0, 2;6, and 3;6 per group), were videotaped regularly over one year while we empirically manipulated novel and familiar speakers of each child's two languages. The results revealed that both groups achieved their early linguistic milestones in each of their languages at the same time (and similarly to monolinguals), produced a substantial number of semantically corresponding words in each of their two languages from their very first words or signs (translation equivalents), and demonstrated sensitivity to the interlocutor's language by altering their language choices. Children did mix their languages to varying degrees, and some persisted in using a language that was not the primary language of the addressee, but the propensity to do both was directly related to their parents' mixing rates, in combination with their own developing language preference. The signing-speaking bilinguals did exploit the modality possibilities, and they did simultaneously mix their signs and speech, but in semantically principled and highly constrained ways. It is concluded that the capacity to differentiate between two languages is well in place prior to first words, and it is hypothesized that this capacity may result from biological mechanisms that permit the discovery of early phonological representations. Reasons why paradoxical views of bilingual acquisition have persisted are also offered.