2004
DOI: 10.1016/j.neuron.2004.06.025
Integration of Letters and Speech Sounds in the Human Brain

Abstract: Most people acquire literacy skills with remarkable ease, even though the human brain is not evolutionarily adapted to this relatively new cultural phenomenon. Associations between letters and speech sounds form the basis of reading in alphabetic scripts. We investigated the functional neuroanatomy of the integration of letters and speech sounds using functional magnetic resonance imaging (fMRI). Letters and speech sounds were presented unimodally and bimodally in congruent or incongruent combinations. Analysi…

Cited by 474 publications (529 citation statements)
References 47 publications
“…Previous studies have shown that posterior STS responds more to multisensory auditory-visual stimuli than to unisensory auditory or visual stimuli (Beauchamp et al., 2004b; Calvert, 2001; Hein et al., 2007; Noesselt et al., 2007; Raij et al., 2000; Van Atteveldt et al., 2004). Consistent with these results, we observed a larger response for multisensory auditory-tactile stimuli than unisensory auditory or tactile stimulation.…”
Section: Multisensory Integration in STSms (supporting)
confidence: 91%
“…That is, the response to auditory-tactile stimuli was greater than the response to auditory or tactile stimuli in isolation, but was not greater than the summed response to auditory and tactile unisensory stimuli (Stein and Meredith, 1993). Previous fMRI studies of auditory-visual integration in STS (Beauchamp et al., 2004a; Beauchamp et al., 2004b; Hein et al., 2007; Van Atteveldt et al., 2004; van Atteveldt et al., 2007) and auditory-tactile integration in auditory cortex (Kayser et al., 2005) have also not observed super-additive changes in the BOLD signal, perhaps because only a few single neurons show super-additivity (Laurienti et al., 2005; Perrault et al., 2005). Supporting this idea, in single-unit recording studies, only a small fraction of STP neurons respond to both auditory and tactile stimulation (Bruce et al., 1981; Hikosaka et al., 1988); the same is true in multisensory regions of cat cortex (Clemo et al., 2007).…”
Section: Multisensory Integration in STSms (mentioning)
confidence: 82%
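
The contrast drawn in this statement can be written compactly. As a sketch using the standard response criteria from Stein and Meredith (1993), with R_A, R_T, and R_{AT} denoting the unisensory auditory, unisensory tactile, and bimodal BOLD responses (this notation is ours, introduced only for illustration):

    R_{AT} > \max(R_A, R_T)    (multisensory enhancement: the bimodal response exceeds the larger unisensory response)
    R_{AT} > R_A + R_T         (super-additivity: the bimodal response exceeds the sum of the unisensory responses)

The studies cited in this excerpt report responses meeting the first criterion but not the second.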
“…This holds also for bilateral STG activation (e.g., Booth et al., 2002a; Tan et al., 2005). Activity in the STG was reported in response to individual speech sounds and letters (van Atteveldt et al., 2004) and to written and spoken narratives (Spitsyna et al., 2006), suggesting heteromodal processing and an involvement of the STG in cross-modal integration and multisensory convergence. Booth et al. (2002a) also reported heteromodal STG activity for spoken words and visual rhyming.…”
Section: Early Phonological Activation in Visual Word Recognition (mentioning)
confidence: 71%
“…However, due to the problems concerning motion and other artifacts associated with speaking in the scanner, only recently have fMRI studies used overt speech [e.g., Barch et al., 2000; De Zubicaray et al., 2001; Kan and Thompson-Schill, 2004; Palmer et al., 2001; Shuster and Lemieux, 2005]. In this study, we used overt speech in combination with a clustered acquisition protocol [e.g., De Zubicaray, 2001; Jäncke et al., 2002; Van Atteveldt et al., 2004], so that stimulus presentation and speech production took place in the silent interval between scans. By avoiding scanning during speaking, a considerable reduction in motion-related artifacts is achieved [Birn et al., 2004; Gracco et al., 2005].…”
Section: Introduction (mentioning)
confidence: 99%
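
To make the clustered-acquisition idea concrete, here is a minimal timing sketch in Python. The durations and trial count are illustrative assumptions only, not values taken from the cited studies: each volume is acquired in a short noisy burst, and the stimulus plus the overt spoken response fall in the silent gap that follows, so speech-related motion does not coincide with image acquisition.

# Minimal sketch of a clustered (sparse) acquisition timeline.
# All numbers below are assumed for illustration, not taken from the cited studies.
ACQ_DURATION = 2.0   # seconds of image acquisition (scanner noise) per volume
SILENT_GAP = 4.0     # seconds of silence for stimulus presentation and overt speech
TR = ACQ_DURATION + SILENT_GAP
N_TRIALS = 4

for trial in range(N_TRIALS):
    t0 = trial * TR
    print(f"trial {trial}: acquire {t0:.1f}-{t0 + ACQ_DURATION:.1f} s, "
          f"stimulus + overt speech {t0 + ACQ_DURATION:.1f}-{t0 + TR:.1f} s")

Because the spoken response is produced while no images are being collected, speech-related head and jaw motion cannot corrupt the acquired volumes, which is the reduction in motion-related artifacts the excerpt refers to.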