With their realistic animation, complex scenarios, and impressive interactivity, computer simulation games might be able to provide context-rich, cognitively engaging virtual environments for language learning. However, simulation games designed for L2 learners are in short supply. As an alternative, could games designed for the mass market be enhanced with support materials that allow students to enter and make use of them for learning? This classroom-based investigation examined whether the best-selling game The SIMs™ could be rendered pedagogically beneficial to university-level ESL learners by means of supplementary materials designed to meet Chapelle's (2001) criteria for CALL task appropriateness. The mixed-methods study found statistically significant improvements in vocabulary knowledge, as well as a generally positive reaction to the modifications among users.
An increasing number of studies on the use of tools for automated writing evaluation (AWE) in writing classrooms suggest growing interest in their potential for formative assessment. As with all assessments, these applications should be validated in terms of their intended interpretations and uses (Kane, 2012). A recent argument-based validation framework outlined inferences that require backing to support integration of one AWE tool, Criterion, into a college-level English as a Second Language (ESL) writing course. The present research appraised evidence for the assumptions underlying two inferences in this argument. In the first of two studies, we assessed evidence for the evaluation inference, which includes the assumption that Criterion provides students with accurate feedback. The second study focused on the utilisation inference, which involves the assumption that Criterion feedback is useful for students in making decisions about revisions. Results showed that accuracy varied considerably across error types, as did students' ability to use Criterion feedback to correct written errors. The findings can inform discussion of whether and how to integrate AWE into writing classrooms, while raising important questions regarding standards for validating AWE as formative assessment, Criterion developers' approach to accuracy, and instructors' assumptions about the underlying purposes of AWE-based writing activities.
Keywords: Academic writing, argument-based validation, automated writing evaluation, ESL, formative assessment
Assessment for learning (AfL) seeks to support instruction by providing information about students' current state of learning, the desired end state of learning, and ways to close the gap. AfL of second-language (L2) writing faces challenges insofar as feedback from instructors tends to focus on written products while neglecting most of the processes that gave rise to them, such as planning, formulation, and evaluation. Meanwhile, researchers studying writing processes have been using keystroke logging (KL) and eye-tracking (ET) to analyze and visualize process engagement. This study explores whether such technologies can support more meaningful AfL of L2 writing. Two Chinese L1 students studying at a U.S. university served as case studies, completing a series of argumentative writing tasks while a KL-ET system traced their processes and produced visualizations that were then used for individualized tutoring. Data sources included the visualizations, tutoring-session transcripts, the participants' assessed final essays, and written reflections. Findings showed that the technologies, in combination with the assessment dialogues they facilitated, made it possible to (1) position the participants in relation to developmental models of writing; (2) identify and address problems with planning, formulation, and revision; and (3) reveal deep-seated motivational issues that constrained the participants' learning.