2017
DOI: 10.3389/fpsyg.2017.02051

Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

Abstract: Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception in noise…

Cited by 21 publications (18 citation statements) | References 78 publications
“…As noise decreases the available acoustic information in the speech signal, it might be more difficult for non‐native listeners to make a phonological mapping between the speech signal and perceptual/linguistic representations, as these might have not been fully tuned to the non‐native language (Flege; Iverson et al.; Lecumberri et al.). Specifically in such situations, visual phonological information that is conveyed by visible speech has been shown to enhance non‐native language learning and comprehension (Hannah et al.; Jongman, Wang, & Kim; Kawase, Hannah, & Wang; Kim, Sonic, & Davis; Wang, Behne, & Jiang). In native listeners, it has been suggested that visual attention is more often directed to the mouth of a talker to extract more information from visible speech when speech is degraded (Buchan, Paré, & Munhall; Król; Munhall; Rennig, Wegner‐Clemens, & Beauchamp).…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, Kelly et al. (2009) investigated how semantic congruence of gesture and speech affected the learning of L2 Japanese vocabulary in native English speakers. Results from a free recall and recognition test showed that compared to speech alone, congruent gestures enhanced memory and incongruent gestures disrupted it (and see Hannah et al., 2017, for a similar effect in L2 phonetic processing). Based on research in this vein, Macedonia (2014) makes a strong case for why hand gestures should be a bigger part of the L2 classroom and language education more generally.…”
Section: Introduction (mentioning)
confidence: 87%
“…Many of the experiments on this topic have focused on how L2 learners attend to information conveyed through the hands when perceiving novel speech sounds (Hannah et al., 2017; Kelly, 2017; Kushch et al., 2018; Baills et al., 2019; Hoetjes et al., 2019) and comprehending new vocabulary (Allen, 1995; Sueyoshi and Hardison, 2005; Sime, 2006; Kelly et al., 2009; Morett, 2014; Morett and Chang, 2015; Baills et al., 2019; Huang et al., 2019). For example, Kelly et al. (2009) investigated how semantic congruence of gesture and speech affected the learning of L2 Japanese vocabulary in native English speakers.…”
Section: Introduction (mentioning)
confidence: 99%
“…By 4–6 months of age, infants in spite of their reduced visual processing can discriminate their native language from other languages partly by relying on visual cues accompanying gestures such as vocalic lip rounding (Weikum et al., 2007). In comparison, visual cues to tonal gestures are weak and unreliable to native listeners (Chen and Massaro, 2008; Hannah et al., 2017). Young infants (4-month-olds) can detect different emotions (happy, angry, sad) when presented with facial-vocal cues (Flom and Bahrick, 2007), an ability emerging prior to affect detection based on unimodal cues (Walker-Andrews, 1997).…”
mentioning
confidence: 99%