Purpose
The aim of this study is to explore students' expectations of and the perceived effectiveness of computer-assisted review tools, as well as the differences in reliability and validity between human and automatic evaluation, in order to find ways to improve students' English writing ability.

Design/methodology/approach
Based on expectancy disconfirmation theory (EDT) and Intelligent Computer-Assisted Language Learning (ICALL) theory, an experiment was conducted using observation, semi-structured interviews and a questionnaire survey. In the experiment, respondents were asked to write and submit a total of four essays, one every two weeks, to three online automated essay evaluation (AEE) systems. In addition, two teacher raters were invited to score each student's first and last essays. The respondents' feedback was examined to confirm the effectiveness of the AEE systems, the evaluation results of the AEE systems and the teachers were compared, and descriptive statistics were used to analyze the experimental data.

Findings
The experiment revealed that the respondents held high expectations for the computer-assisted evaluation tools, and that computer scoring feedback was more effective for students than teacher scoring feedback. Moreover, by the end of the writing project, the students' independent learning ability and English writing ability had improved significantly. In addition, there was a positive correlation between students' initial expectations of the computer-assisted learning tools and their final evaluation of the learning results.

Originality/value
The innovation lies in the use of observation, questionnaire surveys, data analysis and other methods in the experiment, and in the combination of deep learning theory, EDT and descriptive statistics, which offers particular reference value for future work.