Scholars have given little attention to the influence of testing on the outcomes of university English preparatory programs (UEPPs). This study examined variations on the two main approaches to writing evaluation, the holistic and the analytic. The objectives were to identify the assumptions underlying the inclusion of writing in UEPPs, to analyze the skills and abilities tested, and to examine the correlation between program assumptions and testing, together with the potential of different testing methods to affect student motivation. The aims and assumptions of the programs and their course materials were analyzed through synchronic and diachronic comparisons of program structures and teaching materials, using two examples from the past and one currently in use. Results revealed that testing instruments designed and used solely for grading, failing, and promoting students provide no constructive feedback, which demotivates learners. Testing and evaluation should instead be primarily constructive and positive. UEPP writing examinations should therefore be evaluated analytically rather than holistically, both for fairness and to give students constructive, substantive feedback. Rubrics should be constructed for marking paragraphs and essays to ensure fair and consistent grading in large programs with team teaching, and the objective testing of writing skills should be implemented to support instructional goals. The study's significant contribution is the attention it draws to the link between student motivation and the elements of analytic writing evaluation.