Software testing is being added to programming courses at many schools, but current techniques for assessing student-written tests are imperfect. Code coverage measures are typically used in practice, but they have limitations and sometimes overestimate the true quality of tests. Others have proposed mutation analysis instead, but it poses a number of practical obstacles to classroom use. This paper describes a new approach to mutation analysis of student-written tests that is more practical for educational use, especially in an automated grading context. The approach combines several techniques into a novel solution that addresses the shortcomings of more traditional mutation analysis. An evaluation in the context of both CS1 and CS2 courses illustrates how the approach differs from code coverage analysis. At the same time, the evaluation results raise questions of concern for CS educators regarding the relative value of more comprehensive assessment of test quality, the value of more open-ended assignments that offer students significant design freedom, the cost of providing higher-quality reference solutions to support better quality assessment, and the cost of supporting assignments, such as GUI assignments, that require more intensive testing.