2018
DOI: 10.1075/jslp.00006.bli
Computer-assisted visual articulation feedback in L2 pronunciation instruction

Abstract: Language learning is a multimodal endeavor; to improve their pronunciation in a new language, learners access not only auditory information about speech sounds and patterns, but also visual information about articulatory movements and processes. With the development of new technologies in computer-assisted pronunciation training (CAPT) come new possibilities for delivering feedback in both auditory and visual modalities. The present paper surveys the literature on computer-assisted visual articulation feedbac…

Cited by 35 publications (15 citation statements)
References 42 publications
“…These theories lead to our design of providing both audio and visual information as feedback. While previous methods show improvements in L2 learners' pronunciation abilities through training with articulatory animations [3,36], our user study shows that with exaggerated feedback from both modalities, the learning efficiency can be further improved compared to providing only exaggerated audio feedback, which supports the hypotheses of the theories. Exaggerated Feedback.…”
Section: Learning Theories (supporting)
confidence: 75%
“…Prior studies have largely used a more passive type of perceptual training, where participants listen to stimuli and make discrimination or categorization judgments. Little is known about the efficacy of using ultrasound videos for training (e.g., Abel et al., 2015; Bliss et al., 2017, 2018), and the results presented here indicate that further research is necessary to assess their effectiveness as a training method. The results here may also be due to the relative salience of the trained segments, which was purposefully emphasized in the present experimental design.…”
Section: Discussion (mentioning)
confidence: 83%
“…Though its visual feedback was successfully used to detect segmental features in minimal studies (Kartushina, Hervais-Adelman, Frauenfelder, & Golestani, 2015; Olson, 2014; Wulandari, Rodliyah, & Fatimah, 2016), speech analysis technology is usually advocated to improve suprasegmentals (Chun, 2002; Farida & Said, 2016; Gut, 2013; Hincks, 2015; Levis & Pickering, 2004; Li, 2019; Pennington & Rogerson-Revell, 2019). This is because suprasegmentals cannot be seen in the way that the articulation of segmentals is often observable through manipulating the articulatory apparatus (Bliss, Abel, & Gick, 2018). However, a non-specialist "can interpret a pitch contour representing intonation more intuitively than a spectrogram, making visual feedback a more natural fit for teaching intonation" (Imber, Maynard, & Parker, 2017, p. 196).…”
Section: Speech Analysis Technology (mentioning)
confidence: 99%