“…In WE-CBM, scoring primarily considers text production and accuracy, whereas in aLPA we use a wide range of word-, sentence-, and discourse-level indices provided by automated text evaluation software to generate overall writing quality scores. Second, based on research demonstrating that automated text evaluation can generate writing quality scores that are useful for screening (Keller-Margulis, Mercer, & Matta, 2021; Mercer, Keller-Margulis, Faith, Reid, & Ochs, 2019; Wilson, 2018), we also anticipate that computer-based assessment will be necessary to score writing samples and to make such a system feasible for teachers to use. Third, given that multiple, longer-duration writing samples are necessary for reliability (Keller-Margulis et al., 2016), we anticipate that a reduced assessment frequency will be optimal compared with the typical CBM progress monitoring procedure of weekly assessment.…”