The e‐rater® automated essay scoring system is used operationally to score the argument and issue tasks that form the Analytical Writing measure of the GRE® General Test. This study explored the value added of reporting 4 trait scores for each of these 2 tasks over and above the total e‐rater score. The 4 trait scores are word choice, grammatical conventions, fluency and organization, and content. First, confirmatory factor analysis supported this underlying structure. Next, several alternative ways of determining feature weights for the trait scores were compared: weights based on the regression parameters of the trait features on human scores, on the reliability of the trait features, and on feature loadings from the factor-analytic results. Augmented trait scores, which borrow information from the other trait scores, were also analyzed. The added value of each trait score variant was evaluated by comparing how well a trait score on one task could be predicted from the same trait score on the other task versus from the total e‐rater score on the other task. Results supported the use of trait scores and are discussed in terms of their contribution to the construct validity of e‐rater as an alternative essay scoring method.
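To make the weighting schemes and the added-value comparison concrete, the following is a minimal sketch with simulated data; it is not the operational e‐rater pipeline, and the feature groupings, weighting functions, and scores are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: three features assigned to one trait (e.g., grammatical
# conventions), observed on both tasks, plus simulated human scores.
n = 500
feats_t1 = rng.normal(size=(n, 3))                      # trait features, task 1 (argument)
feats_t2 = rng.normal(size=(n, 3))                      # trait features, task 2 (issue)
human_t1 = feats_t1 @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)

def regression_weights(X, y):
    """Weights from an OLS regression linking the trait's features to human scores."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

def reliability_weights(X_a, X_b):
    """Weights proportional to each feature's cross-task (alternate-form) correlation."""
    r = np.array([np.corrcoef(X_a[:, j], X_b[:, j])[0, 1] for j in range(X_a.shape[1])])
    return r / r.sum()

w_reg = regression_weights(feats_t1, human_t1)
w_rel = reliability_weights(feats_t1, feats_t2)

# Trait scores under one weighting scheme, on each task.
trait_t1 = feats_t1 @ w_reg
trait_t2 = feats_t2 @ w_reg

# Stand-in for the total e-rater score on each task (here simply the feature mean).
total_t2 = feats_t2.mean(axis=1)

# Added-value check: is the trait score on task 1 predicted better by the same
# trait score on task 2 than by the total score on task 2?
r_same_trait = np.corrcoef(trait_t1, trait_t2)[0, 1]
r_total      = np.corrcoef(trait_t1, total_t2)[0, 1]
print(f"same-trait cross-task r = {r_same_trait:.3f}, total-score cross-task r = {r_total:.3f}")
```

In this simplified form, a trait score "adds value" when the same-trait cross-task correlation exceeds the correlation obtained from the total score; the study's actual evaluation uses the operational e‐rater features and scores rather than simulated data.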