Recent advances in AI technology, generative AI, and Large Language Models (LLMs) have increased the potential for deploying such tools in educational environments, especially in contexts where fairness, quality, and automation of student assessment are priorities. This study introduces an AI-enhanced evaluation tool that utilizes OpenAI's GPT-4 and the recently released custom GPT feature to evaluate the typography designs of 25 students enrolled in the Visual Media diploma offered by King Abdulaziz University. A mixed methods approach is adopted to evaluate the performance of this tool against the rubric-based evaluations of two human evaluators, considering both grading and text feedback. The results indicate statistically significant differences between the AI tool's grading and feedback and those of Evaluator 2, whereas no significant differences are found with Evaluator 1. The study presents a qualitative interpretation of the evaluators' comprehensive feedback and reflects on directions for further research in this area.