2019
DOI: 10.1002/tesq.531
The Effect of Imagery and On‐Screen Text on Foreign Language Vocabulary Learning From Audiovisual Input

Abstract: In recent years, an increasing number of studies have focused on learning vocabulary from audiovisual input. They have shown that learners can pick up new words incidentally when watching TV (Peters & Webb, 2018; Rodgers & Webb, 2019). Research has also shown that on‐screen text (first language or foreign language subtitles) might increase learning gains (Montero Perez, Peters, Clarebout, & Desmet, 2014; Winke, Gass, & Sydorenko, 2010). Learning is sometimes explained in terms of the beneficial role of on‐screen…

Cited by 122 publications (122 citation statements)
References 43 publications
“…According to the Cognitive Theory of Multimedia Learning (Mayer, 2009), verbal and visual representations take advantage of the full human capacity for processing information and building connections (Scheiter, Wiebe, & Holsanova, 2009), and comprehension is enhanced as a result (Mayer, 2005). Along the same lines, Mayer, Lee, and Peebles (2014) assert that learners benefit from audio plus video because the images in the video help even complete beginners access word meaning, guess unknown words from context, and enhance the meaning of partially known words (Peters, 2019; Webb & Rodgers, 2009b). Moreover, Michas and Berry (2000) found that participants perform better when they learn from video or pictures than from single-channel texts.…”
Section: The Role of Imagery in Viewing Comprehension (mentioning)
confidence: 97%
“…However, those findings raise the question of whether coverage figures derived from reading or listening research can be transferred to TV viewing, because audio-visual input contains imagery, which could provide support for grasping the meaning of unknown words (Peters, 2019) and thus aid comprehension (Rodgers, 2018). To our knowledge, no study has investigated the relationship between lexical coverage and viewing comprehension.…”
Section: Rationale and Research Questions (mentioning)
confidence: 99%
“…However, whether it is captions or subtitles that are more useful in general in an audio-visual context is still a matter of debate, with studies showing mixed results depending on what aspect of the language is being assessed and on learners' proficiency. Captions, in particular, have been shown to aid various aspects of language learning, such as written form recognition (Sydorenko, 2010), aural form recognition (Markham, 1999), form-meaning connection (Winke et al., 2010), meaning recall (Peters, 2019), speech perception (Mitterer & McQueen, 2009), and speech segmentation (Charles & Trenkic, 2015). The majority of studies on comprehension concur, however, that subtitles (in the viewers' native language) facilitate understanding of the content better than captions (Bianchi & Ciabattoni, 2008; Birulés-Muntané & Soto-Faraco, 2016; Latifi et al., 2011; Lwo & Lin, 2012; Markham et al., 2001; Markham & Peter, 2003), which is not surprising, because reading the text in one's native language naturally facilitates understanding.…”
Section: Captions, Subtitles, and Proficiency (mentioning)
confidence: 99%
“…Surprisingly, fewer studies have focused on noncaptioned and nonsubtitled audiovisual input (see Webb, 2018, and Webb, 2019, for two exceptions). Even though the majority of studies have been conducted with university students (e.g., Montero Perez et al., 2014; Peters & Webb, 2018; Rodgers & Webb, 2019; Winke et al., 2010), there has been an increasing number of studies with learners in secondary schools (e.g., Peters, 2019; Peters et al., 2016; Pujadas & Muñoz, 2019; Suárez & Gesa, 2019), primary schools (e.g., Muñoz, 2017), and even preschool children (e.g., Samudra et al., 2019). Research has also moved from using short, educational clips to using full-length TV programs (e.g., Peters & Webb, 2018) and even complete TV shows (e.g., Pujadas & Muñoz, 2019; Rodgers & Webb, 2019).…”
Section: Language Learning From Multimodal Input (mentioning)
confidence: 99%
“…combination of pictorial information (static or dynamic) and verbal input (spoken and/or written). In studies on multimodal input, learning gains are often explained in terms of the visual support (Peters, 2019; Peters & Webb, 2018; Rodgers, 2018; Rodgers & Webb, 2019). However, most of these input types combine not two but three sources of input: (1) pictorial information, (2) written verbal information in captions or subtitles, or in written text, and (3) aural verbal input.…”
(mentioning)
confidence: 99%