Under the framework of argumentation scheme theory (Walton, 1996), we developed annotation protocols for an argumentative writing task to support the identification and classification of the arguments made in essays. Each annotation protocol defined the argumentation schemes (i.e., reasoning patterns) relevant to a given writing prompt and listed questions for evaluating arguments based on those schemes, making the argument structure of a text explicit and classifiable. We report findings from an annotation of 600 essays. Human annotators applied most annotation categories reliably, and some categories contributed significantly to essay scores. Based on the human annotations, an NLP system was developed to identify sentences containing scheme-relevant critical questions.
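The abstract does not describe how the NLP system works; the following is a minimal, hypothetical sketch of one way such a sentence-level detector could be built from human annotations, using a TF-IDF plus logistic regression baseline in scikit-learn. The example sentences, labels, and model choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' system): flag essay sentences that
# address scheme-relevant critical questions, using human-annotated labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the annotated corpus: 1 = sentence marked by annotators as
# raising a scheme-relevant critical question, 0 = not marked.
sentences = [
    "Experts agree that later school start times improve alertness.",
    "But is this expert actually in a position to know about sleep research?",
    "My cousin also thinks school should start later.",
    "What evidence shows that attendance would actually improve?",
]
labels = [0, 1, 0, 1]

# Simple bag-of-ngrams baseline; a real system might use richer features or a
# pretrained sentence encoder instead.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

print(clf.predict(["Is the source of this statistic reliable?"]))
```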
Written communication is considered one of the most critical competencies for academic and career success, as evidenced by surveys of stakeholders in higher education and the workforce. This emphasis on writing skills suggests the need for next-generation assessments of writing proficiency that can inform curricular and instructional improvement. This article presents a comprehensive review of definitions of writing proficiency from key higher education and workforce frameworks; the strengths and weaknesses of existing assessments; and the challenges of designing, implementing, and interpreting such assessments. Consistent with extant frameworks, we propose an operational definition comprising four strands of skills: (a) social and rhetorical knowledge, (b) domain knowledge and conceptual strategies, (c) language use and conventions, and (d) the writing process. Measuring these aspects of writing requires multiple assessment formats (including selected-response [SR] and constructed-response [CR] tasks) to balance construct coverage and test reliability. Next-generation assessments should balance authenticity (e.g., realistic writing tasks) with psychometric quality (e.g., desirable measurement properties) while providing institutions and faculty with actionable data. The review and operational definition presented here should serve as an important resource for institutions seeking to adopt or design an assessment of students' writing proficiency.
This paper presents a systematic framework that links three assessment-development concepts: evidence-centered design (ECD), scenario-based assessment (SBA), and assessment of, for, and as learning. We develop the framework in the context of English language arts (ELA) for K-12 students, though it could readily be applied to reading, writing, and critical thinking skills from pre-K through college. Central to the framework is the concept of a key practice, drawn from constructivist learning theory, which emphasizes the purposeful social context within which skills are recruited and organized to carry out complex literacy tasks. We argue that key practices provide the link between existing CBAL™ ELA learning progressions (defined as part of a student model for literacy skills) and the structure of well-designed SBAs. This structure enables us to design assessments that model a key practice, supporting the systematic creation of task sequences that can serve both instruction and assessment.