This research report examines the occurrence of listener visual cues during nonunderstanding episodes and investigates raters' sensitivity to those cues. Nonunderstanding episodes (n = 21) and length-matched understanding episodes (n = 21) were taken from a larger dataset of video-recorded conversations between second language (L2) English speakers and a bilingual French–English interlocutor (McDonough, Trofimovich, Dao, & Abashidze, 2018). Episode videos were analyzed for the occurrence of listener visual cues, such as head nods, blinks, facial expressions, and holds. Videos of the listener's face were manipulated to create three rating conditions: clear voice/clear face, distorted voice/clear face, and clear voice/blurred face. Raters in the same speech community (N = 66) were assigned to one of the three video conditions and assessed the listener's comprehension. Results revealed differences in the occurrence of listener visual cues between the understanding and nonunderstanding episodes. In addition, raters gave lower ratings of listener comprehension when they had access to the listener's visual cues.
Visual cues may help second language (L2) speakers perceive interactional feedback and reformulate their nontarget forms, particularly when paired with recasts, as recasts can be difficult to perceive as corrective. This study explores whether recasts have a visual signature and whether raters can perceive a recast's corrective function. Transcripts of conversations between a bilingual French–English interlocutor and L2 English university students (n = 24) were analysed for recasts and noncorrective repetitions with rising and declarative intonation. Videos of those excerpts (k = 96) were then analysed for the interlocutor's provision of visual cues during the recast and repetition turns, including eye gaze duration, nods, blinks, and other facial expressions (frowns, eyebrow raises). The videos were rated by 96 undergraduate university students who were randomly assigned to one of three viewing conditions: clear voice/clear face, clear voice/blurred face, or distorted voice/clear face. Using a 100-millimeter scale with two anchor points (0% = he's making a comment, and 100% = he's correcting an error), they rated the corrective function of the interlocutor's responses while their eye gaze was tracked. Raters reliably distinguished recasts from repetitions through their ratings (although ratings were generally low), but not through their eye gaze behaviors.