Abstract: Contextualized, meaning-based interaction in the foreign language is widely recognized as crucial for second language acquisition. Correspondingly, current exercises in foreign language teaching generally require students to manipulate both form and meaning. For Intelligent Language Tutoring Systems to support such activities, they must be able to evaluate the appropriateness of the meaning of a learner response for a given exercise. We discuss such a content-assessment approach, focusing on reading comprehension exercises. We pursue the idea that a range of simultaneously available representations at different levels of complexity and linguistic abstraction provides a good empirical basis for content assessment. We show how an annotation-based NLP architecture implementing this idea can be realized and that it performs successfully on a corpus of authentic learner answers to reading comprehension questions. To support comparison and sustainable development on content assessment, we also define a general exchange format for such exercise data.

Keywords: content assessment, shallow semantic analysis, meaning comparison, textual entailment, intelligent computer-assisted language learning, ICALL, intelligent tutoring systems

Biographical notes: Detmar Meurers is a professor of Computational Linguistics at the University of Tübingen, Germany. Previously he was an associate professor at The Ohio State University, where he founded the ICALL research group focusing on intelligent tutoring systems, content assessment, and automatic input enhancement for language learners.

Ramon Ziai is a PhD candidate at the Collaborative Research Center 833 at the University of Tübingen, Germany. His main research interest and background is in computational linguistics. More specifically, he is interested in shallow semantic analysis and the question of how ill-formed input can be processed.
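The abstract above does not spell out the architecture in code, but the core idea of comparing a learner answer to a target answer at several levels of linguistic abstraction can be sketched briefly. The following is a minimal illustration, assuming a hypothetical token representation with surface form, lemma, and synonym set; the `align` and `assess` functions are invented for this sketch and are not the authors' implementation.

```python
# Minimal sketch of alignment-based content assessment. Tokens are
# assumed to be dicts with "form", "lemma", and "synonyms" keys,
# i.e., annotations at increasing levels of abstraction.

def align(learner_tokens, target_tokens):
    """Greedily align learner tokens to target tokens, preferring the
    most concrete evidence (surface match) before falling back to more
    abstract levels (lemma match, synonym overlap)."""
    matched = set()
    alignments = []
    for lt in learner_tokens:
        for i, tt in enumerate(target_tokens):
            if i in matched:
                continue
            if lt["form"] == tt["form"]:
                level = "token"
            elif lt["lemma"] == tt["lemma"]:
                level = "lemma"
            elif lt["synonyms"] & tt["synonyms"]:
                level = "synonym"
            else:
                continue
            matched.add(i)
            alignments.append((lt["form"], tt["form"], level))
            break
    return alignments

def assess(learner_tokens, target_tokens, threshold=0.6):
    """Binary content assessment based on the proportion of target
    material covered by some alignment (threshold is illustrative)."""
    alignments = align(learner_tokens, target_tokens)
    coverage = len(alignments) / max(len(target_tokens), 1)
    return "appropriate" if coverage >= threshold else "inappropriate"

# Usage: a synonym-level match suffices where surface forms differ.
learner = [{"form": "car", "lemma": "car",
            "synonyms": {"car", "automobile"}}]
target = [{"form": "automobile", "lemma": "automobile",
           "synonyms": {"car", "automobile"}}]
print(assess(learner, target))  # "appropriate" via the synonym level
```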
While immediate feedback on learner language is often discussed in the Second Language Acquisition literature (e.g., Mackey 2006), few systems used in real-life educational settings provide helpful, metalinguistic feedback to learners. In this paper, we present a novel approach that leverages task information to generate the expected range of well-formed and ill-formed variability in learner answers, along with the required diagnosis and feedback. We combine this offline generation approach with an online component that matches the actual student answers against the pre-computed hypotheses. The results obtained for a set of 33,000 answers of 7th-grade German high school students learning English show that the approach successfully covers frequent answer patterns. At the same time, paraphrases and meaning errors require a more flexible alignment approach, for which we plan to complement the method with the CoMiC approach successfully used for the analysis of reading comprehension answers.
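The offline/online division described above can be illustrated with a small sketch: anticipated well-formed and ill-formed answers are generated ahead of time and stored with a diagnosis and a feedback message, and at runtime the student answer is matched against this table. The example answers, diagnoses, and the `normalize` and `match` helpers below are hypothetical illustrations, not the paper's actual generation machinery.

```python
# Illustrative sketch of matching student answers against pre-computed
# answer hypotheses. In practice the table would be generated offline
# from the task information; here it is written out by hand.

def normalize(answer: str) -> str:
    """Case- and whitespace-insensitive key for exact-match lookup."""
    return " ".join(answer.lower().split())

# Offline-generated hypotheses: each anticipated answer maps to a
# diagnosis label and a metalinguistic feedback message.
hypotheses = {
    normalize("She has been reading the book."):
        ("correct", "Well done!"),
    normalize("She has been read the book."):
        ("wrong participle", "After 'been', use the -ing form of the verb."),
    normalize("She is reading the book."):
        ("wrong tense", "The context requires the present perfect progressive."),
}

def match(student_answer: str):
    """Online component: look up the pre-computed diagnosis, falling back
    to a default when no hypothesis covers the answer -- the case a more
    flexible, CoMiC-style alignment is meant to handle."""
    return hypotheses.get(normalize(student_answer),
                          ("no match", "Answer not covered by hypotheses."))

print(match("she has been  read the book."))  # ('wrong participle', ...)
```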
We discuss the collection and analysis of a cross-sectional and longitudinal learner corpus consisting of answers to reading comprehension questions written by adult second language learners of German. We motivate the need for such task-based learner corpora and identify the properties that make reading comprehension exercises a particularly interesting task. In terms of the creation of the corpus, we introduce the web-based WELCOME tool we developed to support the decentralized data collection and annotation of the richly structured corpus in real-life language teaching programs. On the analysis side, we investigate the binary and the complex content-assessment classification schemes used by the annotators and the inter-annotator agreement obtained for the current corpus snapshot, at the halfway point of our four-year effort. We present results showing that for such task-based corpora, meaning assessment can be performed with reasonable agreement, and we discuss several sources of disagreement.
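Since the abstract reports inter-annotator agreement for a binary content-assessment scheme, a worked example of a standard chance-corrected agreement statistic may be helpful. The sketch below computes Cohen's kappa over invented toy labels; whether kappa is the measure used in the paper is not stated here, and the numbers are not drawn from the corpus.

```python
# Worked example of inter-annotator agreement on a binary
# content-assessment label, using Cohen's kappa.

from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy annotations: 4 of 5 items agree, chance agreement is 0.48.
annotator1 = ["correct", "incorrect", "correct", "correct", "incorrect"]
annotator2 = ["correct", "incorrect", "correct", "incorrect", "incorrect"]
print(cohens_kappa(annotator1, annotator2))  # ~0.62 for this toy sample
```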
Intervention studies typically target a focused aspect of language learning that is studied over a relatively short time frame with a relatively small number of participants in a controlled setting. While this is effective for many research questions, it can also limit the ecological validity and relevance of the results for real-life language learning. In educational science, large-scale randomized controlled field trials (RCTs) are seen as the gold-standard method for addressing this challenge, yet they require interventions to scale to hundreds of learners in their varied, authentic contexts. We discuss the use of technology in support of large-scale interventions that are fully integrated into regular classes in secondary school. As an experimentation platform, we developed a web-based workbook to replace a printed workbook widely used in German schools. The web-based FeedBook provides immediate scaffolded feedback to students on form and meaning for various exercise types, covering the full range of constructions in the seventh-grade English curriculum. Following the conceptual discussion, we report the first results of an ongoing, yearlong RCT. The results confirm the effectiveness of the scaffolded feedback, and the approach makes student and learning-process variables accessible for the analysis of learning in a real-world context.