2018
DOI: 10.1002/hbm.23987

Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension

Abstract: During face‐to‐face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and compl…

Cited by 56 publications (72 citation statements)
References 65 publications
“…In line with our hypotheses, both native and non‐native listeners demonstrated a clear gestural enhancement effect on the cued–recall task (following Drijvers & Özyürek, , ; Drijvers et al, , ). This gestural enhancement effect was the largest when speech was degraded in both native and non‐native listeners.…”
Section: Discussion (supporting)
confidence: 86%
“…This N400 effect in clear speech was larger for non‐native listeners than native listeners, which might indicate that they focus more strongly on gestures than native listeners, to extract semantic information to aid comprehension. Similarly, previous neuroimaging work has indicated that both native and non‐native listeners engage their visual cortex more when speech is degraded and a gesture is present than when speech is clear or no gesture is present, possibly to allocate more visual attention to gestures and increase information uptake (Drijvers et al, , ). Non‐native listeners, however, engage areas involved in semantic retrieval and semantic unification, and visible speech processing less than native listeners during gestural enhancement of degraded speech, suggesting that non‐native listeners might be hindered in integrating the degraded phonological cues with the semantic information conveyed by the gesture.…”
Section: Introduction (mentioning)
confidence: 76%
“…A Hann window of 2 s in width was selected so that the signal could be carefully examined. The area under each power spectrum for each condition was calculated for the following frequency bands: 4–8 Hz (θ), 8–12 Hz (α), 12–25 Hz (β), 25–60 Hz (low γ), and 60–90 Hz (high γ) (26, 27). …”
Section: Methods (mentioning)
confidence: 99%
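
The band-power calculation described in the Methods excerpt above can be illustrated with a minimal Python sketch: spectral power is estimated with a 2 s Hann window, and each band's power is taken as the area under the power spectrum within that band. The function names, sampling rate, synthetic data, and the use of SciPy's Welch estimator are illustrative assumptions, not the cited authors' actual pipeline.

```python
# Sketch: area under the power spectrum per frequency band, using a 2 s Hann window.
# Assumptions: single-channel signal, SciPy's Welch PSD estimate, 1000 Hz sampling.
import numpy as np
from scipy.signal import welch

BANDS = {
    "theta (4-8 Hz)": (4, 8),
    "alpha (8-12 Hz)": (8, 12),
    "beta (12-25 Hz)": (12, 25),
    "low gamma (25-60 Hz)": (25, 60),
    "high gamma (60-90 Hz)": (60, 90),
}

def band_powers(signal, fs):
    """Return the area under the power spectrum within each frequency band."""
    nperseg = int(2 * fs)  # 2 s Hann window, as in the excerpt
    freqs, psd = welch(signal, fs=fs, window="hann", nperseg=nperseg)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    return powers

if __name__ == "__main__":
    fs = 1000                              # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10 * fs)       # 10 s of synthetic noise
    for band, power in band_powers(x, fs).items():
        print(f"{band}: {power:.3e}")
```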