Despite the importance of analytic text-based writing, relatively little is known about how to teach this skill. A persistent barrier to conducting research that would provide insight into best practices for teaching this form of writing is the lack of outcome measures that assess students' analytic text-based writing development and that are feasible to implement at scale. Automated essay-scoring (AES) technologies offer one potential approach to increasing the feasibility of research in this area, provided that the scores yield information about substantive dimensions of writing aligned to new standards and are sensitive to variation in literacy instruction. The authors describe an approach to using AES technologies to provide information about students' skills at marshaling text evidence in the upper elementary grades. Specifically, the authors examined 1,529 responses to a response-to-text assessment (RTA) from 65 fifth- and sixth-grade language arts classrooms, from which the authors also collected data on instruction via logs, text-based writing assignments, and surveys. Through correlational, univariate, and multilevel multivariate analyses, the authors found validity evidence supporting automated scoring of the RTA: close correspondence between human and AES scores, alignment of AES scores with the components of instruction expected to predict variation in students' writing quality, and expected associations between AES scores and other measures of student achievement. These findings provide encouraging evidence that AES technologies, as applied to the RTA, can generate valid inferences about students' ability to marshal text evidence in writing and, thus, could be a useful tool for advancing large-scale writing research.