This paper undertakes a review of the literature on writing cognition, writing instruction, and writing assessment with the goal of developing a framework and competency model for a new approach to writing assessment. The model developed is part of the Cognitively Based Assessments of, for, and as Learning (CBAL) initiative, an ongoing research project at ETS intended to develop a new form of kindergarten through Grade 12 (K-12) assessment that is based on modern cognitive understandings; built around integrated, foundational, constructed-response tasks that are equally useful for assessment and for instruction; and structured to allow multiple measurements over the course of the school year. The model that emerges from the review places a strong emphasis on writing as an integrated, socially situated skill that cannot be assessed properly without taking into account the fact that most writing tasks involve management of a complex array of skills over the course of a writing project, including language and literacy skills, document-creation and document-management skills, and critical-thinking skills. As such, the model makes strong connections with emerging conceptions of reading and literacy, suggesting an assessment approach in which writing is viewed as calling upon a broader construct than is usually tested in assessments that focus on relatively simple, on-demand writing tasks.
Syntactic island constraints are generally viewed as paradigmatic evidence for the autonomy of syntax. However, the existence of exceptions has been known for some time. Close examination reveals that these exceptional phenomena resist purely syntactic explanations, requiring instead an account in terms of semantic relations like attribution, various kinds of semantic framing effects, and discourse variables like topic and focus. It will be argued that the syntactic and extrasyntactic factors which limit extraction can be subsumed into a general account based on a cognitive theory of attention. According to the analysis presented below, extraction phenomena represent a situation in which the language user must attend simultaneously to two parts of the syntactic structure, a situation which strains the limited working memory available for automatic syntactic processing. Long-range extraction takes place under conditions which reduce that processing strain, that is, when both the extracted element and the matrix for extraction command attention anyway. In other words, the factors which control the distribution of long-range extraction do so indirectly through their impact on the distribution of attention.
Under the framework of argumentation scheme theory (Walton, 1996), we developed annotation protocols for an argumentative writing task to support identification and classification of the arguments being made in essays. Each annotation protocol defined the argumentation schemes (i.e., reasoning patterns) relevant to a given writing prompt and listed critical questions to help evaluate arguments based on those schemes, making the argument structure of a text explicit and classifiable. We report findings based on an annotation of 600 essays. Most annotation categories were applied reliably by human annotators, and some categories contributed significantly to essay scores. Based on the human annotations, an NLP system was developed to identify sentences containing scheme-relevant critical questions.
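To make the protocol structure concrete, the following is a minimal Python sketch, not the authors' actual protocols or NLP system: it represents an annotation protocol as a set of argumentation schemes, each paired with the critical questions annotators would use to evaluate arguments of that type. All class names, the example prompt, and the example scheme content are hypothetical.

```python
# Illustrative sketch only: a data structure pairing argumentation schemes
# with critical questions for a single writing prompt. Names and example
# content are hypothetical, not drawn from the study's protocols.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ArgumentationScheme:
    name: str                      # reasoning pattern, e.g. "argument from consequences"
    critical_questions: List[str]  # questions used to evaluate arguments of this type


@dataclass
class AnnotationProtocol:
    prompt: str
    schemes: List[ArgumentationScheme] = field(default_factory=list)


# Hypothetical protocol for a single writing prompt.
protocol = AnnotationProtocol(
    prompt="Should schools require community service?",
    schemes=[
        ArgumentationScheme(
            name="argument from consequences",
            critical_questions=[
                "How likely is the cited consequence to occur?",
                "Are there counter-consequences that should be weighed?",
            ],
        ),
    ],
)

for scheme in protocol.schemes:
    print(scheme.name, "->", len(scheme.critical_questions), "critical questions")
```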
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the e‐rater® automatic essay scoring system to measure product features (measurable traits of the final written text) and features extracted from keystroke logs to measure process features (measurable features of the writing process). These techniques are applied to student essays written during large‐scale pilot administrations of writing assessments developed for ETS's CBAL™ research initiative. The design makes it possible to explore the factor structures of these product and process features and to examine how well they generalize beyond a single test session to predict underlying traits such as writing ability and reading level. Three product factors are identified, connected to fluency, accuracy, and content. Three process factors are identified, corresponding to hesitancy behaviors, editing behaviors, and burst span (the extent to which text is produced in long bursts with only short internal pauses). The results suggest that writing process and product features have stable factor structures that generalize to predict writing quality, though there are some genre‐ or task‐specific differences.
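As an illustration of the kind of process features described above, here is a minimal Python sketch, not the CBAL or e-rater implementation: it computes burst spans and a simple hesitancy measure from a keystroke log, assuming each log entry is a (timestamp in seconds, characters inserted) pair and using a 2-second pause threshold; both the log format and the threshold are assumptions for illustration, not the study's actual parameters.

```python
# Illustrative sketch: simple process features from a keystroke log.
# Assumes each log entry is (timestamp_in_seconds, characters_inserted);
# the 2-second pause threshold is an assumed convention, not the study's.

from typing import List, Tuple


def burst_spans(log: List[Tuple[float, int]], pause_threshold: float = 2.0) -> List[int]:
    """Split the keystroke stream into bursts separated by pauses longer than
    pause_threshold and return the number of characters produced in each burst."""
    bursts = []
    current = 0
    prev_time = None
    for timestamp, chars in log:
        if prev_time is not None and timestamp - prev_time > pause_threshold:
            bursts.append(current)  # a long pause ends the current burst
            current = 0
        current += chars
        prev_time = timestamp
    if current:
        bursts.append(current)
    return bursts


def hesitancy_rate(log: List[Tuple[float, int]], pause_threshold: float = 2.0) -> float:
    """Proportion of inter-event intervals that exceed the pause threshold."""
    gaps = [later[0] - earlier[0] for earlier, later in zip(log, log[1:])]
    return sum(gap > pause_threshold for gap in gaps) / len(gaps) if gaps else 0.0


# Example: a short writing session with one long pause.
session = [(0.0, 5), (1.0, 7), (1.5, 4), (6.0, 9), (6.8, 6)]
print(burst_spans(session))     # [16, 15]
print(hesitancy_rate(session))  # 0.25
```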