An automated essay scoring (AES) program is a software system that uses techniques from corpus linguistics, computational linguistics, and machine learning to grade essays. In this study, we aimed to describe and evaluate particular Coh-Metrix language features for a novel AES program designed to score junior and senior high school students’ essays from large-scale assessments. Specifically, we studied nine categories of Coh-Metrix features for developing prompt-specific AES scoring models for our sample. We developed the models by exploiting the informativeness of the nine feature categories through dimensionality reduction, within a three-stage scoring framework. The machine scores were validated against a “gold standard” of ratings, that is, the scores assigned by two human raters. The nine language features reliably captured the construct of the students’ writing quality. A secondary analysis comparing our scoring models with other, already established AES systems revealed no systematic pattern of scoring discrepancy. However, for essays with widely divergent human ratings, the scoring models were disadvantaged by the inherent unreliability of the human scores.
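To make the modeling pipeline concrete, the following is a minimal sketch, not the authors’ implementation, of a prompt-specific scoring model of the kind described above: feature categories are reduced via dimensionality reduction, regressed onto averaged human ratings, and the resulting machine scores are validated against the human “gold standard.” The synthetic data, the nine placeholder feature columns, and the specific choices of PCA, ridge regression, and quadratic weighted kappa are illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch only: prompt-specific AES scoring via dimensionality
# reduction over nine Coh-Metrix-style feature categories. All data and
# model choices here are assumptions for demonstration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 500 essays, 9 feature categories (e.g., referential
# cohesion, syntactic complexity, lexical diversity, ...), with human
# ratings on a 1-6 scale.
X = rng.normal(size=(500, 9))
w = rng.normal(size=9)
human = np.clip(np.rint(3.0 + 0.4 * (X @ w) + rng.normal(scale=0.5, size=500)), 1, 6)

X_train, X_test, y_train, y_test = train_test_split(
    X, human, test_size=0.2, random_state=0
)

# Dimensionality reduction (PCA) followed by a linear scoring model.
model = make_pipeline(StandardScaler(), PCA(n_components=5), Ridge(alpha=1.0))
model.fit(X_train, y_train)

# Round predictions to the rating scale and validate against the human
# "gold standard" with quadratic weighted kappa, a common AES agreement metric.
pred = np.clip(np.rint(model.predict(X_test)), 1, 6)
qwk = cohen_kappa_score(y_test.astype(int), pred.astype(int), weights="quadratic")
print(f"Quadratic weighted kappa vs. human ratings: {qwk:.2f}")
```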