Automated summary evaluators (ASEs) leverage advanced natural language processing tools and techniques to assess linguistic features of students' written responses (Allen, Jacovina, & McNamara, 2016; Passonneau et al., 2018; Strobl et al., 2019). ASEs such as Summary Street (Wade‐Stein & Kintsch, 2004), the Online Summary Assessment and Feedback System (Sung, Liao, Chang, Chen, & Chang, 2016), crowd‐sourced summary evaluation (Li, Cai, & Graesser, 2016, 2018), ROUGE (Lin, 2004), SEMILAR (Rus, D'Mello, Hu, & Graesser, 2013) and PyrEval (Gao, Warner, & Passonneau, 2019) use hundreds of descriptive linguistic indices capturing word‐level (e.g., lexical diversity), sentence‐level (e.g., syntactic complexity) and document‐level (e.g., sentence‐to‐sentence cohesion) qualities of a summary, both to evaluate writing quality and to drive feedback that helps students improve their summary writing skills. However, feedback in ASEs tends to focus on the act of summarizing rather than on comprehension (Sung et al., 2016).
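To make the three index families concrete, the following is a minimal, illustrative Python sketch, not the implementation of any of the systems cited above. It computes simple proxies for each level: type–token ratio for word‐level lexical diversity, mean sentence length in words as a crude stand‐in for sentence‐level syntactic complexity, and mean lexical overlap between adjacent sentences as a rough document‐level cohesion measure. The function names, regex tokenization, and sample summary are all assumptions for illustration.

```python
import re


def split_sentences(text: str) -> list[str]:
    # Naive sentence split on terminal punctuation followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def tokenize(text: str) -> list[str]:
    # Lowercased word tokens; a real ASE would use a proper tokenizer.
    return re.findall(r"[a-z']+", text.lower())


def lexical_diversity(text: str) -> float:
    # Word-level index: type-token ratio (unique words / total words).
    toks = tokenize(text)
    return len(set(toks)) / len(toks) if toks else 0.0


def mean_sentence_length(text: str) -> float:
    # Sentence-level proxy: average sentence length in words.
    sents = split_sentences(text)
    return sum(len(tokenize(s)) for s in sents) / len(sents) if sents else 0.0


def adjacent_cohesion(text: str) -> float:
    # Document-level proxy: mean Jaccard overlap of word sets
    # between each pair of adjacent sentences.
    sent_sets = [set(tokenize(s)) for s in split_sentences(text)]
    pairs = list(zip(sent_sets, sent_sets[1:]))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs if a | b) / len(pairs)


# Hypothetical student summary, used only to demonstrate the indices.
summary = (
    "The water cycle moves water through the environment. "
    "Water evaporates from oceans and lakes. "
    "The water vapor condenses into clouds and falls as rain."
)
print(f"lexical diversity:   {lexical_diversity(summary):.3f}")
print(f"mean sentence length: {mean_sentence_length(summary):.1f}")
print(f"adjacent cohesion:   {adjacent_cohesion(summary):.3f}")
```

Production ASEs differ mainly in scale and sophistication, computing hundreds of such indices with full parsers and semantic models (e.g., LSA‐based cohesion in Summary Street), but the overall pipeline is the same: extract per‐level numeric features from the summary, then map them to scores or feedback messages.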