Effective distractors in multiple-choice items should attract lower-ability students (those with misconceptions or limited knowledge and skills) because such distractors are built on common misconceptions or errors in logic. A large, multi-state data set collected for a quasi-experimental study of test modifications was analyzed to measure the modifications' impact on distractor functioning. The key modification of interest was the removal of the weakest of three distractors from 39 reading items and 39 mathematics items. The modification neither systematically improved nor systematically weakened distractor functioning; however, more than 70% of the remaining distractors became more discriminating. In mathematics, a moderate correlation between distractor selection rate and distractor discrimination may indicate that the modified items were being missed by the appropriate students. Implications of these findings for test developers are discussed.
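As a minimal sketch of the discrimination statistic referenced above (not necessarily the study's exact method), distractor discrimination is commonly estimated as the point-biserial correlation between a 0/1 indicator of choosing the distractor and the examinee's total score; an effective distractor correlates negatively with ability. The function and data below are hypothetical illustrations.

```python
from math import sqrt

def point_biserial(chose_distractor, total_scores):
    """Pearson correlation between a 0/1 choice indicator and total scores.

    A strongly negative value means the distractor draws mainly
    low-scoring examinees, i.e., it discriminates well.
    """
    n = len(total_scores)
    mean_x = sum(chose_distractor) / n
    mean_y = sum(total_scores) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(chose_distractor, total_scores)) / n
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in chose_distractor) / n)
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in total_scores) / n)
    return cov / (sd_x * sd_y)

# Hypothetical responses: the three lowest scorers chose the distractor,
# the three highest scorers did not, so the correlation is strongly negative.
chose = [1, 1, 1, 0, 0, 0]
scores = [10, 12, 14, 30, 32, 34]
print(round(point_biserial(chose, scores), 3))  # -0.987
```

In this framing, a distractor that "became more discriminating" after modification is one whose point-biserial moved further below zero.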