Although fluency is an important subconstruct of language proficiency, it has received less attention in L2 writing research than complexity and accuracy, in part because methodological approaches for analyzing large datasets of writing-process data have been lacking. This article presents a method of time-aligned keystroke logging and eye-tracking and reports an empirical study investigating L2 writing fluency through this method. Twenty-four undergraduate students at a private university in Turkey performed two writing tasks delivered through a web text editor with embedded keystroke logging and eye-tracking capabilities. Linear mixed-effects models were fit to predict indices of pausing and reading behaviors from language status (L1 vs. L2) and linguistic context factors. Findings revealed differences between pausing and eye-fixation behavior in L1 and L2 writing processes. The article concludes by discussing the affordances of the proposed method from theoretical and practical standpoints.
We describe the Multilanguage Written Picture Naming Dataset, which provides trial-level data, together with timing and agreement norms, for written naming of the 260 pictures of everyday objects that compose the colorized Snodgrass and Vanderwart picture set (Rossion & Pourtois in Perception, 33, 217–236, 2004). Adult participants gave keyboarded responses in their first language under controlled experimental conditions (N = 1,274, with subsamples responding in Bulgarian, Dutch, English, Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Portuguese, Russian, Spanish, and Swedish). We measured the time to initiate a response (RT) and interkeypress intervals, and calculated measures of name and spelling agreement. Across all languages, there was a tendency for quicker RTs to pictures with higher familiarity, image agreement, name frequency, and name agreement. Effects of spelling agreement, and effects on output rates after writing onset, were present in some but not all languages. Written naming therefore shows name retrieval effects similar to those found in speech, but our findings suggest the need for cross-language comparisons as we seek to understand the orthographic retrieval and/or assembly processes that are specific to written output.
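The two timing measures named above, response time (RT) and interkeypress intervals, can be derived from a per-trial keystroke log roughly as sketched below. The event format, function name, and example trial are illustrative assumptions, not the dataset's actual schema.

```python
from typing import List, Tuple

def rt_and_ikis(keypresses: List[Tuple[float, str]],
                stimulus_onset: float = 0.0) -> Tuple[float, List[float]]:
    """Given (timestamp_ms, key) events for one trial, return the RT
    (first keypress time minus stimulus onset) and the list of
    interkeypress intervals (gaps between successive keypresses)."""
    times = [t for t, _ in keypresses]
    rt = times[0] - stimulus_onset
    ikis = [later - earlier for earlier, later in zip(times, times[1:])]
    return rt, ikis

# Hypothetical trial: a participant types "cat", with the first
# keypress 850 ms after picture onset.
rt, ikis = rt_and_ikis([(850.0, "c"), (1010.0, "a"), (1180.0, "t")])
# rt is 850.0 ms; ikis is [160.0, 170.0] ms
```

Slowing in these intervals after writing onset is what the abstract refers to as effects on output rates.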
An increasing number of studies on the use of tools for automated writing evaluation (AWE) in writing classrooms suggests growing interest in their potential for formative assessment. As with all assessments, these applications should be validated in terms of their intended interpretations and uses (Kane, 2012). A recent argument-based validation framework outlined the inferences that require backing to support the integration of one AWE tool, Criterion, into a college-level English as a Second Language (ESL) writing course. The present research appraised evidence for the assumptions underlying two inferences in this argument. In the first of two studies, we assessed evidence for the evaluation inference, which includes the assumption that Criterion provides students with accurate feedback. The second study focused on the utilisation inference, which involves the assumption that Criterion feedback is useful for students in making decisions about revisions. Results showed that accuracy varied considerably across error types, as did students' abilities to use Criterion feedback to correct written errors. The findings can inform discussion of whether and how to integrate AWE into writing classrooms, while raising important questions regarding standards for the validation of AWE as formative assessment, Criterion developers' approach to accuracy, and instructors' assumptions about the underlying purposes of AWE-based writing activities.
Keywords: academic writing, argument-based validation, automated writing evaluation, ESL, formative assessment
In this paper, we introduce L2-ARCTIC, a speech corpus of non-native English intended for research in voice conversion, accent conversion, and mispronunciation detection. This initial release includes recordings from ten non-native speakers of English whose first languages (L1s) are Hindi, Korean, Mandarin, Spanish, and Arabic, with one male and one female speaker per L1. Each speaker recorded approximately one hour of read speech from the Carnegie Mellon University ARCTIC prompts, from which we generated orthographic and forced-aligned phonetic transcriptions. In addition, we manually annotated 150 utterances per speaker to identify three types of mispronunciation errors: substitutions, deletions, and additions. These annotations make the corpus a valuable resource not only for research in voice conversion and accent conversion but also for computer-assisted pronunciation training. The corpus is publicly accessible at https://psi.engr.tamu.edu/l2-arctic-corpus/.