Creativity is one of the most crucial skills for success in the 21st-century workforce. It is particularly important in science, technology, engineering, and mathematics (STEM)-related fields, and more empirical studies are needed to assess and improve creativity in STEM-related learning environments. In this study, we designed and validated an automated, unobtrusive, formative assessment of creativity in EarSketch, a computational music remixing platform in which students learn to write Python or JavaScript code to create pieces of music. Using an existing data set of EarSketch projects (n = 53), we addressed two research questions: (Research Question 1) to what extent is the automated assessment of creativity that we designed in EarSketch psychometrically sound (focusing on validity and reliability), and (Research Question 2) which variables (i.e., divergent thinking, complexity, and self-report variables) predict students' creativity in EarSketch? Our main findings show that (a) the automated assessment of creativity has reasonable convergent validity (r = .47) and discriminant validity; (b) the automated assessment of creativity has a reliability estimate of .70; and (c) divergent thinking and students' confidence in learning how to code significantly predicted students' creativity scores on an external, consensual assessment of creativity by EarSketch experts. Providing learning environments that can assess and support essential skills such as creativity alongside other STEM-related skills such as programming and computational thinking holds great promise for developing the next generation of the workforce, one that is not merely aware of STEM concepts and principles but is creative and innovative in pursuing STEM solutions.
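
To make the psychometric checks concrete, the sketch below (not the study's actual analysis pipeline) illustrates in Python how a convergent validity correlation, a Cronbach's-alpha-style reliability estimate, and an RQ2-style regression could be computed. All variable names and values (auto_scores, expert_ratings, dt_scores, confidence, the simulated data) are hypothetical placeholders, not data or code from the study.

# A minimal, hypothetical sketch of the kinds of psychometric analyses
# summarized above; all arrays are simulated placeholders.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 53  # sample size reported in the abstract

# Hypothetical scores: automated creativity scores, three expert raters'
# consensual ratings, divergent-thinking scores, and self-reported confidence.
auto_scores = rng.normal(size=n)
expert_ratings = 0.5 * auto_scores[:, None] + rng.normal(scale=0.9, size=(n, 3))
expert_mean = expert_ratings.mean(axis=1)
dt_scores = rng.normal(size=n)
confidence = rng.normal(size=n)

# Convergent validity: correlation between automated and expert creativity scores.
r, p = stats.pearsonr(auto_scores, expert_mean)
print(f"convergent validity: r = {r:.2f}, p = {p:.3f}")

# One common reliability estimate (Cronbach's alpha) across the expert raters.
def cronbach_alpha(ratings):
    """ratings: (n_subjects, k_raters) matrix of scores."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"reliability (alpha) = {cronbach_alpha(expert_ratings):.2f}")

# RQ2-style regression: do divergent thinking and confidence in learning to
# code predict expert-rated creativity?
X = sm.add_constant(np.column_stack([dt_scores, confidence]))
print(sm.OLS(expert_mean, X).fit().summary())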