This study explores the possibilities that new Artificial Intelligence tools offer teachers for designing assessments to improve the written proficiency of students of English as a Foreign Language (EFL; the participants in this study predominantly have Spanish as their L1) in a university English language course with a CEFR B2 objective. The group we monitor is typical of the Spanish university system: more than sixty students with diverse backgrounds and unequal proficiency in English. Under such conditions, the teacher must be highly attentive in order to meet the needs of all learners while keeping track of the successes and failures of the designed study plans. In a scenario such as the one described, one of the most notable causes of course failure and dropout is the performance demanded by, and the time devoted to, written competence (Cabrera, 2014; López Urdaneta, 2011). Consequently, we explore whether combining the theoretical foundations of Error Analysis, one of the most notable linguistic and pedagogical traditions for enhancing EFL writing competence, with new intelligent technologies can provide fresh perspectives and strategies to help EFL learners produce more appropriate written texts (more natural outputs). At the same time, we examine whether an AI-assisted, Error Analysis-based assessment produces better results in error avoidance and rule application in the collected writing samples.