In the field of education, ChatGPT has become a topic of debate regarding its usefulness as a learning tool. This article focuses on non-science majors' (n = 29) perceptions of a ChatGPT-enabled final exam: prior to the exam, students wrote papers on science and sustainability, and during the final exam, they were asked to compare their papers to ones produced on the same topics by ChatGPT. The comparison covered the underlying chemistry, its broader impacts, its connection to sustainability, and writing style. Students' perceptions were analyzed through a coding framework developed to visualize emerging themes. The most common themes revealed that students believed the ChatGPT essay did not read as "human-like," used more intricate words, and often did not include enough science to support its arguments. Students also noted that their own essays provided more chemistry details and were easier to read because they focused on connecting chemistry concepts to the essay topic as well as to sustainable policies and practices. Students were impressed, however, by ChatGPT's ability to discuss various sustainability solutions, policies, and practices. The final exam inspired self-reflection, prompting students to improve not only their writing but also their analysis of sustainability responses. Overall, students rated the comparative activity favorably as a final exam and remarked on the importance of analyzing AI-generated work for the future of learning.