Previous comparisons of vocabulary uptake from captioned and uncaptioned audio-visual materials have almost consistently furnished evidence in favour of captioned materials. However, it is possible that many such comparative studies gave an advantage to the captioned input conditions by virtue of their use of written word prompts in the tests. The present study therefore examines whether aurally presented test prompts yield equally compelling evidence for the superiority of captioned over uncaptioned video. Intermediate EFL learners watched a ten-minute TED Talks video either with or without captions and were subsequently given a word recognition test and a word meaning test, with half of the test prompts presented in print and the other half presented aurally. While the results of the word recognition test were inconclusive, the word meaning test yielded significantly higher scores for the group that watched the captioned video. However, this advantage was due entirely to their superior scores on the printed word prompts, not the aural ones. This suggests that evaluations of the benefits of captions for vocabulary acquisition should take the congruency between input modality and test modality into account.
Both first (L1) and second (L2) language speakers learn new meanings of known words through reading and listening. This learning results in changes in the mental lexicon, including adjustments to how existing (old) meanings are accessed. To investigate how the lexical-semantic space changes as a result of learning new meanings through encountering them multiple times in context, two studies were conducted.

Study 1 was a conceptual replication and extension of Hulme et al. (2018). Fifty-two native English speakers read four short stories which contained critical words with invented secondary meanings (e.g., cake was given a new meaning, a tribal headdress). The number of exposures to the critical words was manipulated (i.e., 2, 4, 6, or 8) within items and within participants. Explicit knowledge was assessed through a cued recall test of meaning and a cued recall test of form. The effect of acquiring new unrelated meanings on the processing of the old meanings was operationalised using a semantic relatedness judgement (SRJ) task. The results of the immediate recall tests of form and meaning corroborated Hulme et al.'s findings: multiple encounters with the secondary meanings produced substantial explicit knowledge, and more explicit knowledge of the meaning was produced with more exposures. An inhibitory effect in the SRJ task was found for the trained (but not the untrained) targets, suggesting competition between newly acquired and well-established meanings. A stronger inhibitory effect was found when there were fewer exposures (2 and 4) than when there were more (6 and 8), indicating that the number of exposures modulates the degree of competition.

Study 2 investigated how secondary meanings of known words are learned from listening. L1 and L2 adult English speakers (56 participants in each group) listened to recorded stories in which context variability (i.e., varied or repeated contexts) was manipulated. The measures of learning were the same as in Study 1, but a cross-modal version of the SRJ task (where critical words were presented auditorily and meaning probes visually) was used. Both L1 and L2 participants learned the new meanings through listening to the short stories, regardless of context variability condition. L1 participants performed significantly better than their L2 counterparts, and participants who scored higher on a comprehension test scored significantly better on both tasks. Results from the SRJ task showed that encountering new (unrelated) meanings of known words during listening created a perturbation effect on the processing of previously known meanings. This effect was observed on both response accuracy and reaction time measures. Overall, the results suggest that changes were taking place in the lexical-semantic space: semantic competition between the old and new unrelated meanings slowed down the recognition of old meanings and created more errors when judging the old meanings.

This research builds a more detailed picture of changes in the mental lexicon resulting from contextual learning of new unrelated meanings, and of how these changes are affected by number of exposures, context variability, comprehension, and language group.
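To make the shape of the SRJ analysis concrete, below is a minimal sketch of how trial-level reaction times from such a task could be modelled in Python with pandas and statsmodels. The file name, column names, and model specification are illustrative assumptions for this example, not the authors' actual data or analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data from a semantic relatedness judgement (SRJ)
# task: one row per trial, with reaction time in ms ('rt'), accuracy (0/1),
# whether the target word was trained ('trained'), the number of exposures
# to its new meaning ('exposures': 2, 4, 6, or 8), and a participant ID.
trials = pd.read_csv("srj_trials.csv")

# Analyse correct responses only and log-transform RTs, a common step in
# reaction-time analyses (not necessarily the authors' exact pipeline).
correct = trials[trials["accuracy"] == 1].copy()
correct["log_rt"] = np.log(correct["rt"])

# Mixed-effects model with a by-participant random intercept; the
# trained-vs-untrained contrast and exposure count are fixed effects, so the
# interaction term indexes how exposures modulate the inhibitory effect.
model = smf.mixedlm("log_rt ~ trained * exposures", correct,
                    groups=correct["participant"])
print(model.fit().summary())
```

A fuller analysis would typically also include by-item random effects and an accuracy model; the sketch only shows the core contrast described in the abstract.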
Both first (L1) and second (L2) language speakers learn new meanings of known words through reading and listening. This learning results in changes in the mental lexicon, including adjustments to how existing (old) meanings are accessed. To investigate how the lexical-semantic space changes as a result of learning new meanings through encountering them multiple times in context, two studies were conducted. Two text-related factors were investigated: number of exposures (Study 1) and context variability (Study 2). In this seminar, I will detail how the two studies were conducted, share some of the main findings and provide suggestions for future research.
Language researchers who investigate language processing, representation, and production rely heavily on behavioural data to test research hypotheses. This normally involves presenting a research participant with some sort of linguistic stimuli and then measuring the speed and/or accuracy of the participant’s response. The cognitive processes associated with these decisions occur very rapidly, requiring precise timing down to the millisecond. It may thus come as a surprise to hear that this sort of data can now be collected from participants over the internet, rather than in a lab. This seminar will trace the history and status of these methods and then share how four members of LALS have used this type of data in their research, theses, and teaching. The speakers will detail the benefits and drawbacks of these approaches and provide suggestions for how future researchers can best explore this new frontier of data collection.
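As an illustration of what this kind of internet-collected behavioural data looks like in practice, here is a minimal Python sketch that screens and summarises per-trial reaction times and accuracy exported from a hypothetical browser-based lexical decision task. The file name, column names, and cut-offs are assumptions for the example, not a recommendation from the speakers.

```python
import pandas as pd

# Hypothetical export from a browser-based lexical decision experiment:
# one row per trial, with the keypress latency recorded by the participant's
# own browser in milliseconds ('rt_ms') and response accuracy ('correct').
trials = pd.read_csv("online_experiment_export.csv")

# Internet-collected timing data tends to be noisier than lab data, so
# implausibly fast or slow responses are usually excluded before analysis
# (the exact cut-offs here are illustrative, not prescriptive).
clean = trials[(trials["rt_ms"] >= 200) & (trials["rt_ms"] <= 2500)]

# Per-participant speed and accuracy summaries: the two behavioural
# measures mentioned above.
summary = clean.groupby("participant").agg(
    mean_rt_ms=("rt_ms", "mean"),
    accuracy=("correct", "mean"),
    n_trials=("rt_ms", "size"),
)
print(summary.head())
```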