Tests play an important role in computing education, measuring achievement and differentiating between learners with varying knowledge. But tests may have flaws that confuse learners or may be too difficult or too easy, making test scores less valid and reliable. We analyzed the Second Computer Science 1 (SCS1) concept inventory, a widely used assessment of introductory computer science (CS1) knowledge, for such flaws. The prior validation study of the SCS1 used Classical Test Theory and was unable to determine whether differences in scores were a result of question properties or learner knowledge. We extended this validation by modeling question difficulty and learner knowledge separately with Item Response Theory (IRT) and performing expert review on problematic questions. We found that three questions measured knowledge unrelated to the rest of the SCS1, and four questions were too difficult for our sample of 489 undergraduates from two universities.
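As an illustrative sketch of this separation (the abstract does not name the specific IRT model used), a common formulation is the two-parameter logistic (2PL) model, which places learner ability and item difficulty on a shared scale:

% Probability that learner j answers item i correctly under the 2PL model,
% where \theta_j is learner j's ability, b_i is item i's difficulty, and
% a_i is item i's discrimination (all assumed notation, for illustration):
\[
  P(X_{ij} = 1 \mid \theta_j, a_i, b_i) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}
\]

Because \(\theta_j\) and \(b_i\) are estimated separately, widespread incorrect answers on an item can be attributed to high difficulty (large \(b_i\)) rather than low learner ability, and an item with discrimination \(a_i\) near zero measures something largely unrelated to the latent trait; these correspond to the two kinds of flaw reported above, a distinction that Classical Test Theory's aggregate scores conflate.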