2018
DOI: 10.1016/j.cognition.2018.03.018
Learning to recognize unfamiliar talkers: Listeners rapidly form representations of facial dynamic signatures

Cited by 6 publications (20 citation statements)
References 97 publications
“…These aftereffects indicated a shift in the visual phonetic categories to include the ambiguous speech gesture into the category intended by the speaker. These results fit within a larger literature showing that perceivers are sensitive to visual idiosyncrasies (Heald & Nusbaum, 2014; Yakel, Rosenblum, & Fortier, 2000) and learn about them (Jesse & Bartoli, 2018; van der Zande, Jesse, & Cutler, 2013). Audiovisual speech thus allows listeners to perceive speech more reliably, in that it disambiguates the currently experienced speech but also facilitates future speech perception by recalibrating phonetic categories so that listeners can accommodate to a talker.…”
supporting
confidence: 85%
“…Lexical information, however, recalibrates visual phonetic categories directly, and not indirectly via the recalibration of auditory categories (van der Zande et al., 2013). Listeners also use talkers' idiosyncratic realizations of visual speech to form representations of these talkers' identities (Jesse & Bartoli, 2018). These representations allow listeners to recognize talkers even from new utterances.…”
Section: Discussion
mentioning
confidence: 99%
“…Visual interference and/or "McGurk" illusory effects (fusion of conflicting auditory and visual phonetic content; McGurk & MacDonald, 1976) are observed with point-light stimuli (Rosenblum & Saldaña, 1996), when visual detail is removed by severe blurring or spatial quantization (MacDonald, Andersen, & Bachmann, 2000; Thomas & Jordan, 2002), and when visual speech is observed from distances up to 20 meters (Jordan & Sergeant, 2000) or in nonfoveal regions of the visual field (Paré, Richler, ten Hove, & Munhall, 2003). Moreover, observers can identify particular talkers -and even learn the identity of particular talkers -from dynamic visual speech information alone (Girges, Spencer, & O'Brien, 2015; Jesse & Bartoli, 2018; Rosenblum, Niehus, & Smith, 2007; Rosenblum, Smith, Nichols, Hale, & Lee, 2006; Rosenblum et al., 2002), and the natural movement of a talker's head alone provides a significant boost to audiovisual speech recognition in noise (K. G. .…”
Section: The Role Of Multisensory Temporal Covariation In Audiovisual
mentioning
confidence: 99%
“…Subsequent experiments from this study also revealed that this sort of talker learning could be extended to the more difficult task of identifying four talkers. Furthermore, these researchers found that their talker learning effects were not driven by differences in talker sex (Jesse & Bartoli, 2018; Jesse & Saba, 2017). These results are complementary to the sinewave speech results of Remez and his colleagues (i.e., Sheffert et al., 2002), who (as noted above) showed that unfamiliar talkers could be learned and subsequently recognized in sinewave speech.…”
Section: Introduction
mentioning
confidence: 98%
“…More recent research with point-light speech now demonstrates that these stimuli can also support talker learning (Jesse & Bartoli, 2018). In a recent study, participants were trained to identify two novel talkers in point-light speech.…”
Section: Introduction
mentioning
confidence: 99%