Listener-based ratings have become a prominent means of defining second language (L2) users’ global speaking ability. In most cases, local listeners are recruited to evaluate speech samples in person. However, in many teaching and research contexts, recruiting local listeners may not be possible or advisable. The goal of this study was to hone a reliable method of recruiting listeners to evaluate L2 speech samples online through Amazon Mechanical Turk (AMT) using a blocked rating design. Three groups of listeners were recruited: local laboratory raters and two AMT groups, one comprising listeners of the dialects to which the L2 speakers had been exposed and the other comprising listeners of a variety of dialects. Reliability was assessed using intraclass correlation coefficients, Rasch models, and mixed-effects models. Results indicate that online ratings can be highly reliable as long as appropriate quality-control measures are adopted. The method and results can guide future work with online samples.
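As a minimal sketch of the first of those reliability checks, the Python below computes the two-way intraclass correlation coefficients ICC(2,1) and ICC(2,k) (Shrout & Fleiss, 1979) for a fully crossed items-by-raters matrix. The function and variable names are illustrative, and a blocked design like the one described would first need to be assembled into, or approximated by, such a matrix.

```python
import numpy as np

def icc2(ratings: np.ndarray):
    """ICC(2,1) and ICC(2,k) for an items-by-raters matrix (Shrout & Fleiss, 1979)."""
    n, k = ratings.shape
    grand = ratings.mean()
    rows = ratings.mean(axis=1)   # per-item (speech sample) means
    cols = ratings.mean(axis=0)   # per-rater means
    msr = k * ((rows - grand) ** 2).sum() / (n - 1)   # between-items mean square
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)   # between-raters mean square
    resid = ratings - rows[:, None] - cols[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))    # residual mean square
    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average

# Toy example: 10 speech samples rated by 5 raters on a 9-point scale.
rng = np.random.default_rng(0)
demo = rng.integers(1, 10, size=(10, 5)).astype(float)
print(icc2(demo))
```

ICC(2,k) describes the reliability of the averaged ratings, which is typically the quantity of interest when mean listener ratings serve as a speaking-ability score.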
The accurate identification of likely segmental pronunciation errors produced by nonnative speakers of English is a longstanding goal in pronunciation teaching. Most lists of pronunciation errors for speakers of a particular first language (L1) are based on the experience of expert linguists or teachers of English as a second language (ESL) and English as a foreign language (EFL). Such lists are useful, but they are subject to blind spots: they may overlook less noticeable errors while suggesting that more salient errors are more important. This exploratory study tested whether using a database of read sentences would reveal recurrent errors that expert opinion had overlooked. We conducted a systematic error analysis of advanced L1 Arabic learners of English (n = 4) using L2-ARCTIC, a publicly available collection of 1,132 phonetically balanced English sentences read aloud by 24 speakers from six language backgrounds. To test whether the database was useful for pronunciation error identification, we analysed the Arabic speakers’ sentence readings (n = 599), which were annotated in Praat for pronunciation deviations from General American English. The findings give an empirically supported description of persistent pronunciation errors for Arabic learners of English. Although necessarily limited in scope, the study demonstrates how similar datasets can be used regardless of the L1 being investigated. Discussing the errors in terms of their functional load (Brown, 1988) suggests which persistent errors are most likely to merit classroom attention, helping teachers focus their limited classroom time for optimal learning.
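The abstract does not specify how error counts were combined with functional load, but a minimal sketch of that kind of weighting might look like the following; the error tokens and weights are hypothetical placeholders, not Brown’s (1988) published figures.

```python
from collections import Counter

# Hypothetical (intended, produced) phone pairs, e.g. exported from Praat
# annotations like those described above; the tokens here are illustrative only.
errors = [("p", "b"), ("p", "b"), ("ɪ", "i"), ("v", "f"), ("p", "b"), ("ɪ", "i")]

# Placeholder functional-load weights per contrast; substitute Brown's (1988)
# published rankings in real use.
functional_load = {("p", "b"): 0.9, ("ɪ", "i"): 0.8, ("v", "f"): 0.3}

counts = Counter(errors)
# Rank contrasts by frequency weighted by functional load, so that frequent,
# high-load errors surface first as candidates for classroom attention.
ranked = sorted(counts, key=lambda c: counts[c] * functional_load.get(c, 0.0),
                reverse=True)
for contrast in ranked:
    print(contrast, counts[contrast], functional_load.get(contrast, 0.0))
```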
Previous research suggests that repeated words in discourse are durationally shortened relative to their first mention, particularly when the words describe the same scene in a story. However, previous methods have often relied on reading passages, which may be challenging for second language (L2) speakers, or on films, which require significant cultural knowledge. These methods may also yield different findings from spontaneous speech, in which the accessibility of discourse referents can be manipulated with a picture narrative. This pilot study used a multi-scene picture narrative to elicit word reduction in spontaneous discourse. L2 English speakers with Korean, Chinese, Vietnamese, or Spanish as a first language narrated a story using a sequence of eight pictures/scenes about two strangers who collide and accidentally pick up each other's suitcase. Compared with those of native speakers of English, their productions showed similar patterns of repeated-word reduction. The results suggest that durations typically reset to full duration when words are repeated in different scenes but reduce within scenes. The degree of second-mention reduction also varies modestly by first language. Finally, the results show that a picture narrative is a promising method for eliciting second-mention reductions in spontaneous speech and that word durations are sensitive to scene changes.
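As a sketch of how such durational patterns might be tested statistically, the following fits a mixed-effects model of log word duration with statsmodels; the file name and column names are assumptions for illustration, not the study’s actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per word token, with columns
# log_dur (log word duration), mention ("first"/"second"), same_scene
# (whether a repetition occurs within the same picture/scene), l1, speaker.
tokens = pd.read_csv("narrative_durations.csv")  # file name is illustrative

# Random intercepts by speaker; the fixed effects test whether second
# mentions shorten and whether the reduction is confined to within-scene
# repetitions, with L1 as a covariate.
model = smf.mixedlm(
    "log_dur ~ mention * same_scene + l1",
    data=tokens,
    groups=tokens["speaker"],
).fit()
print(model.summary())
```

Under this coding, a negative mention effect combined with a mention-by-scene interaction would correspond to the within-scene reduction and cross-scene reset described above.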
Previous research on visualization of speech segments for pronunciation training suggests that such learning results in improved segmental production (e.g., Kartushina et al., 2015; Olson, 2014; Patten & Edmonds, 2015). However, investigation into real-time formant visualization for L2 vowel production training has been limited to training either a single vowel or a pair of vowels (Carey, 2004; Sakai, 2016) and to examining improvement on trained items only (Kartushina et al., 2015). This project investigates the effects of real-time formant visualization on production training for eight L2 vowels in trained and untrained environments as well as in spontaneous speech. L2 learners (n = 11) participated in nine 30-minute training sessions, during which they used a formant visualization system to practice their vowel production. A control group (n = 8) completed audio-only vowel production training. A pre-test, post-test, and delayed post-test design was used, and pronunciation improvement was analyzed acoustically using Mahalanobis distance and mixed-effects modeling. Real-time visual acoustic feedback resulted in greater retained improvement in vowel quality for both trained and untrained items than audio-only training did. Spontaneous speech did not improve. The findings suggest that this system could be an effective pedagogical tool for L2 learners.
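The abstract does not detail the acoustic analysis, but a minimal sketch of a Mahalanobis-distance measure of vowel quality, assuming each vowel token is summarized by its F1/F2 values, could look like this; all numbers are illustrative.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical F1/F2 measurements (Hz): rows are native-speaker tokens of
# one vowel category; values are illustrative only.
native = np.array([[310.0, 2200.0],
                   [330.0, 2150.0],
                   [300.0, 2250.0],
                   [320.0, 2180.0]])
learner_token = np.array([420.0, 1900.0])

# Distance of a learner token from the native vowel distribution: smaller
# distances after training would indicate improved vowel quality.
center = native.mean(axis=0)
vi = np.linalg.inv(np.cov(native, rowvar=False))  # inverse covariance matrix
d = mahalanobis(learner_token, center, vi)
print(f"Mahalanobis distance: {d:.2f}")
```

A decrease in this distance from pre-test to post-test, modeled across items and speakers with mixed-effects regression, would correspond to the retained improvement in vowel quality reported above.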