2019
DOI: 10.1177/0956797619841813

Electrophysiological Evidence for Top-Down Lexical Influences on Early Speech Perception

Abstract: An unresolved issue in speech perception concerns whether top-down linguistic information influences perceptual responses. We addressed this issue using the event-related-potential technique in two experiments that measured cross-modal sequential-semantic priming effects on the auditory N1, an index of acoustic-cue encoding. Participants heard auditory targets (e.g., “potatoes”) following associated visual primes (e.g., “MASHED”), neutral visual primes (e.g., “FACE”), or a visual mask (e.g., “XXXX”). Auditory …

Cited by 28 publications (35 citation statements). References 38 publications.
“…Non-auditory cues can also be used to facilitate categorical perception. For example, observing the mouth movements of a speaker or seeing a written representation of a word prior to the obscured sound both facilitated perceptual performance (Sohoglu et al., 2012, 2014; Getz and Toscano, 2019; Pinto et al., 2019). For example, providing a written example of a semantically associated word (e.g., "MASHED") prior to an acoustic representation of a word with ambiguous voice onset time (e.g., "potatoes") facilitated categorical perception of the initial consonant more than unrelated visual primes (Getz and Toscano, 2019).…”
Section: Introduction (mentioning; confidence: 99%)
“…For example, observing the mouth movements of a speaker or seeing a written representation of a word prior to the obscured sound both facilitated perceptual performance (Sohoglu et al., 2012, 2014; Getz and Toscano, 2019; Pinto et al., 2019). For example, providing a written example of a semantically associated word (e.g., "MASHED") prior to an acoustic representation of a word with ambiguous voice onset time (e.g., "potatoes") facilitated categorical perception of the initial consonant more than unrelated visual primes (Getz and Toscano, 2019). This use of cross-modal semantic priming modulated the earliest electroencephalography (EEG) peak examined by the investigators, the N1 peak, thought to be related to primary auditory cortex activation (Hillyard et al., 1973; Näätänen and Picton, 1987).…”
Section: Introduction (mentioning; confidence: 99%)
“…However, it is not clear whether this top-down signal reflects actual linguistic content, as opposed to less specific processes such as attentional modulation (McQueen et al., 2016). Getz and Toscano (2019) overcame this using the N1 EEG component. The N1 is an early component that reflects a number of low-level auditory processes.…”
Section: Third, We Ask If Speech Perception Is Accomplished Entirely… (mentioning; confidence: 99%)
“…Second, do listeners maintain only a veridical (bottom-up) representation of the input (Firestone and Scholl, 2016; Norris et al., 2000; Lupyan and Clark, 2015), or is perception biased by top-down expectations (McMurray and Jongman, 2011; Getz and Toscano, 2019; Broderick et al., 2019)? The present study uses a novel electroencephalography (EEG) paradigm to address three questions concerning the dynamics of processing at different levels of speech perception.…”
Section: Introduction (mentioning; confidence: 99%)
“…However, the phonetic-phonological correspondences differ across languages (e.g., Lisker & Abramson, 1964), and thus the informative variability in talker-specific phonetic idiosyncrasies may be more opaque to listeners when they are identifying foreign-language voices. Higher-level linguistic structure, such as words, guides both the perception and interpretation of ambiguous phonetic information (Getz & Toscano, 2019; Samuel, 1997, 2001) and can facilitate phonetic processing even in an unfamiliar language (Samuel & Frost, 2015). Correspondingly, by providing listeners with higher-level linguistic representations through which they can interpret the ambiguous phonetics of foreign-language speech, known lexical content may give listeners a scaffold upon which they can extract more information about talker-specific phonetic variation and thus facilitate foreign-language talker identification.…”
Section: The Role of Familiar Words and Higher-Level Linguistic Units (mentioning; confidence: 99%)