Turning a blind eye to the lexicon: ERPs show no cross-talk between lip-read and lexical context during speech sound processing
2015 | DOI: 10.1016/j.jml.2015.06.008

Cited by 25 publications (27 citation statements)
References 74 publications (116 reference statements)

“…A versus AV data were analyzed for 93 individuals who had participated in the studies listed in Table . Peak amplitudes and latencies were determined for 75 of those (as mentioned above, the data from Baart & Samuel, 2015, was excluded from peak analyses), but the mean N1 and P2 amplitudes in 50-ms windows were calculated for all 93 ID ERPs. The N1 and P2 peak amplitude/latency differences mirrored the pattern of the GA analyses (see Table and Figure a,b), as lip-read speech had suppressed the auditory N1 and P2, ts(74) > 3.27, one-tailed ps < .001, with no statistical difference between amplitude suppression at the N1 and P2, t(74) = 1.50, p = .138.…”
Section: Results
citation type: mentioning
confidence: 99%
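
For concreteness, a minimal sketch of the analysis quoted above: per-subject mean amplitudes in fixed 50-ms latency windows, compared across auditory-only (A) and audiovisual (AV) conditions with paired t-tests. The sampling rate, window bounds, and data below are placeholder assumptions, not values from the original studies.

```python
# Hedged sketch of the A-vs-AV suppression analysis quoted above:
# mean amplitudes in 50-ms windows, then paired t-tests.
# Sampling rate, window bounds, and the synthetic ERPs are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_samples, fs = 93, 400, 500    # 400 samples at 500 Hz; assumed
t0 = 50                                     # samples before sound onset; assumed
a_erps  = rng.normal(size=(n_subjects, n_samples))   # placeholder A-only ERPs
av_erps = rng.normal(size=(n_subjects, n_samples))   # placeholder AV ERPs

def window_mean(erps, start_ms, end_ms):
    """Per-subject mean amplitude in a latency window (ms after sound onset)."""
    i0 = t0 + int(start_ms * fs / 1000)
    i1 = t0 + int(end_ms * fs / 1000)
    return erps[:, i0:i1].mean(axis=1)

# Mean N1 and P2 amplitudes in 50-ms windows (window centers assumed).
n1_a,  p2_a  = window_mean(a_erps,  75, 125), window_mean(a_erps,  175, 225)
n1_av, p2_av = window_mean(av_erps, 75, 125), window_mean(av_erps, 175, 225)

# One-tailed paired t-tests: AV is predicted to be suppressed relative to A.
# Suppression = smaller absolute amplitude (the N1 is negative, the P2 positive).
t_n1, p_n1 = stats.ttest_rel(np.abs(n1_a), np.abs(n1_av), alternative="greater")
t_p2, p_p2 = stats.ttest_rel(np.abs(p2_a), np.abs(p2_av), alternative="greater")

# Does suppression differ between N1 and P2? Two-tailed paired t-test on
# per-subject suppression magnitudes (cf. the quoted t(74) = 1.50).
t_diff, p_diff = stats.ttest_rel(np.abs(n1_a) - np.abs(n1_av),
                                 np.abs(p2_a) - np.abs(p2_av))
print(t_n1, p_n1, t_p2, p_p2, t_diff, p_diff)
```
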
“…For example, van Wassenhove and colleagues (2005) found that lip-read information suppressed the amplitude of the auditory N1 and P2 and sped up both peaks. In contrast, others observed no lip-read-induced suppression of the N1 (e.g., Baart & Samuel, 2015; Frtusova, Winneke, & Phillips) or the P2 (see, e.g., Figure 2 in Treille, Vilain, & Sato), or no latency effect at the N1 (e.g., Kaganovich & Schumaker) or P2 (e.g., Stekelenburg & Vroomen).…”
citation type: mentioning
confidence: 97%

“…However, the influence of visual speech cues on lexical processing remains debated (e.g., Barutchu et al., 2008; Dekle et al., 1992; Sams et al., 1998; Windmann, 2004). Importantly, in an ERP study, Baart and Samuel (2015) did not observe any interaction between the effect of visual speech and lexicality. In that experiment, participants received three-syllable words and pseudowords in auditory-only, visual-only, and audiovisual modalities.…”
Section: How Does the Speech Modality Modulate Contact With The Lexicon?
citation type: mentioning
confidence: 96%
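
The null interaction reported by Baart and Samuel (2015) corresponds, in design terms, to a two-way repeated-measures test of lexicality by modality on some per-subject ERP measure. A hedged sketch of such a test, with hypothetical column names and placeholder data rather than the study's actual measures:

```python
# Hedged sketch: two-way repeated-measures ANOVA (lexicality x modality),
# the kind of test behind an "interaction between visual speech and
# lexicality" claim. Data, column names, and factor levels are placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = [
    {"subject": s, "lexicality": lex, "modality": mod,
     "amplitude": rng.normal()}          # placeholder per-subject ERP measure
    for s in range(20)
    for lex in ("word", "pseudoword")
    for mod in ("A", "V", "AV")
]
df = pd.DataFrame(rows)

# The lexicality x modality interaction term indexes whether lip-read
# context modulates the lexical effect; Baart and Samuel (2015) report
# no such interaction.
res = AnovaRM(df, depvar="amplitude", subject="subject",
              within=["lexicality", "modality"]).fit()
print(res.anova_table)
```
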
“…However, the two factors did not affect each other’s degree of influence and occurred at the same time points. Although Baart and Samuel (2015) did not test incongruent AV stimuli, their results suggest that lexical access and the integration of auditory and visual signals might, in certain circumstances, occur in parallel.…”
citation type: mentioning
confidence: 91%

“…There are some hints in prior research supporting this hypothesis. For example, Baart and Samuel (2015) presented subjects with spoken words and nonwords that differed at the onset of the third syllable (like “banana” and “banaba”). Additionally, the third syllable was presented either auditory-only, visual-only (i.e., mouthed), or auditory-visual.…”
citation type: mentioning
confidence: 99%