Although fluency is an important subconstruct of language proficiency, it has not received as much attention in L2 writing research as complexity and accuracy have, in part due to the lack of methodological approaches for the analysis of large datasets of writing-process data. This article presents a method of time-aligned keystroke logging and eye-tracking and reports an empirical study investigating L2 writing fluency through this method. Twenty-four undergraduate students at a private university in Turkey performed two writing tasks delivered through a web text editor with embedded keystroke logging and eye-tracking capabilities. Linear mixed-effects models were fit to predict indices of pausing and reading behaviors based on language status (L1 vs. L2) and linguistic context factors. Findings revealed differences between pausing and eye-fixation behavior in L1 and L2 writing processes. The article concludes by discussing the affordances of the proposed method from the theoretical and practical standpoints.
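The abstract above mentions indices of pausing derived from keystroke logging. As a rough illustration of what such an index might look like, the sketch below computes pause counts and mean pause duration from inter-keystroke intervals; the 2-second cutoff is a common convention in keystroke-logging research, and the function name, data, and threshold here are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: deriving a simple pausing index from keystroke
# timestamps. Field names and the threshold are illustrative only.

def pause_indices(timestamps, threshold=2.0):
    """Compute pause count and mean pause duration (seconds).

    An inter-keystroke interval longer than `threshold` counts as a pause
    (a 2-second cutoff is a common convention in keystroke-logging work).
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    pauses = [iv for iv in intervals if iv > threshold]
    mean_pause = round(sum(pauses) / len(pauses), 3) if pauses else 0.0
    return {"n_pauses": len(pauses), "mean_pause_dur": mean_pause}

# Example: a short typing burst with one long pause between 1.1 s and 4.6 s
log = [0.0, 0.3, 0.7, 1.1, 4.6, 4.9, 5.2]
print(pause_indices(log))  # {'n_pauses': 1, 'mean_pause_dur': 3.5}
```

Indices like these, computed per participant and linguistic context, are the kind of dependent variables a linear mixed-effects model could then predict from language status (L1 vs. L2).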
Assessment for learning (AfL) seeks to support instruction by providing information about students' current state of learning, the desired end state of learning, and ways to close the gap. AfL of second-language (L2) writing faces challenges insofar as feedback from instructors tends to focus on written products while neglecting most of the processes that gave rise to them, such as planning, formulation, and evaluation. Meanwhile, researchers studying writing processes have been using keystroke logging (KL) and eye-tracking (ET) to analyze and visualize process engagement. This study explores whether such technologies can support more meaningful AfL of L2 writing. Two Chinese L1 students studying at a U.S. university who served as case studies completed a series of argumentative writing tasks while a KL-ET system traced their processes and then produced visualizations that were used for individualized tutoring. Data sources included the visualizations, tutoring-session transcripts, the participants' assessed final essays, and written reflections. Findings showed the technologies, in combination with the assessment dialogues they facilitated, made it possible to (1) position the participants in relation to developmental models of writing; (2) identify and address problems with planning, formulation, and revision; and (3) reveal deep-seated motivational issues that constrained the participants' learning.
This classroom-based study employs a mixed-methods approach to explore both short-term and long-term effects of Criterion feedback on ESL students’ development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students at both intermediate-high and advanced-low levels reduce errors in eight of nine categories from first drafts to final drafts within the same papers (short-term effects). However, only one statistically significant error reduction emerges from the first drafts of the first paper to the first drafts of subsequent papers, in the Run-on Sentence category, for students at both levels (long-term effects). The findings from interviews with the participants reveal students’ perceptions of Criterion feedback and help explain the feedback effects. Implications for a more effective use of AWE tools in ESL classrooms are discussed.
Thanks to natural language processing technologies, computer programs are actively being used not only for holistic scoring, but also for formative evaluation of writing. CyWrite is one such program that is under development. The program is built upon Second Language Acquisition theories and aims to assist ESL learners in higher education by providing them with effective formative feedback to facilitate autonomous learning and improvement of their writing skills. In this study, we focus on CyWrite's capacity to detect grammatical errors in student writing. We specifically report on (1) computational and pedagogical approaches to the development of the tool in terms of students' grammatical accuracy, and (2) the performance of our grammatical analyzer. We evaluated the performance of CyWrite on a corpus of essays written by ESL undergraduate students with regard to four types of grammatical errors: quantifiers, subject-verb agreement, articles, and run-on sentences. We compared CyWrite's performance at detecting these errors to the performance of a well-known commercially available AWE tool, Criterion. Our findings demonstrated better performance metrics for our tool as compared to Criterion, and a deeper analysis of false positives and false negatives shed light on how CyWrite's performance can be improved.
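The performance comparison described above rests on standard detection metrics computed from true positives, false positives, and false negatives. The sketch below shows how such metrics are typically derived; the counts are invented for illustration and are not results from the study.

```python
# Hypothetical sketch of the evaluation metrics typically used to compare
# error-detection tools; the counts below are made up for illustration.

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 for an error-detection tool.

    tp: errors correctly flagged; fp: false alarms; fn: missed errors.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Illustrative counts for one error category, e.g. subject-verb agreement
print(detection_metrics(tp=40, fp=10, fn=20))
```

Precision penalizes false alarms while recall penalizes missed errors, which is why an analysis of false positives and false negatives, as reported in the abstract, points directly at where a tool can improve.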
Language learning potential: evidence suggesting that the feedback provided by the AWE tool leads to students' noticing and focusing on the use of lexical bundles.