Grading is one of the most significant burdens for instructors, diverting their focus from developing engaging learning activities, preparing classes, and attending to students' questions. Institutions and instructors continuously seek ways to reduce the time educators spend grading, frequently resulting in the hiring of teaching assistants whose inexperience and high turnover can lead to inconsistent and subjective evaluations. Large Language Models (LLMs) such as GPT‐4 may alleviate grading challenges; however, research in this field is limited for assignments that require specialized knowledge, complex critical thinking, subjectivity, and creativity. This research investigates whether GPT‐4's scores correlate with human grading in a construction capstone project and how providing GPT‐4 with criteria and rubrics influences this correlation. Projects were graded by two human graders and by GPT‐4 under three training configurations: no detailed criteria, paraphrased criteria, and explicit rubrics. Each configuration was tested over 10 iterations to evaluate GPT‐4's consistency. The results challenge GPT‐4's potential to grade argumentative assignments: its scores correlate slightly better (although poorly overall) with human evaluations when no additional information is provided, underscoring the limited impact of training-material specificity on GPT‐4 scoring for this type of assignment. Despite the promise of LLMs, their limitations include variable consistency and reliance on statistical pattern analysis, which can produce misleading evaluations, along with privacy concerns when handling sensitive student data. Educators must carefully design grading guidelines to harness the full potential of LLMs in academic assessments, balancing AI's efficiency with the need for nuanced human judgment.