2019
DOI: 10.3758/s13414-018-01651-x

Attentional resources contribute to the perceptual learning of talker idiosyncrasies in audiovisual speech

Abstract: To recognize audiovisual speech, listeners evaluate and combine information obtained from the auditory and visual modalities. Listeners also use information from one modality to adjust their phonetic categories to a talker's idiosyncrasy encountered in the other modality. In this study, we examined whether the outcome of this cross-modal recalibration relies on attentional resources. In a standard recalibration experiment in Experiment 1, participants heard an ambiguous sound, disambiguated by the accompanying…

Cited by 3 publications (1 citation statement)
References 81 publications
“…Critically, this design required the use of a 4‐alternative forced choice (4AFC) task, wherein listeners were required to categorize both the final segment of the context item and the first segment of the target item. It is possible that this cognitively demanding 4AFC task obscured potential LCfC effects, especially since previous work suggests that some perceptual effects in speech processing may be attenuated when task demands are heightened; for example, increasing cognitive load can attenuate cross‐modal phonetic recalibration effects driven by visual (lipreading) information (Jesse & Kaplan, 2019), and shifting attentional resources away from the speech signal can extinguish the influence of lexical knowledge on phonetic retuning (Samuel, 2016). In addition to simple complexity/demand issues, note that McQueen and colleagues in various papers have argued that the Ganong effect results from postperceptual bias, while accepting that CfC is a perceptual‐level effect.…”
Section: Accounting For Null Results
Citation type: mentioning
Confidence: 99%