Introduction. In neuropsychological research and clinical practice, a large battery of tests is often administered to determine whether an individual deviates from the norm. We formulate three criteria for such large-battery normative comparisons. First, the familywise false-positive error rate (i.e., the complement of specificity) should be controlled at, or below, a prespecified level. Second, sensitivity to detect genuine deviations from the norm should be high. Third, the comparisons should be easy enough for routine application, not only in research but also in clinical practice. Here we show that these criteria are satisfied by current procedures used to assess an overall deviation from the norm, that is, a deviation given all test results. However, we also show that these criteria are not satisfied by current procedures used to assess test-specific deviations, which are required, for example, to investigate dissociations in a test profile. We therefore propose several new procedures to assess such test-specific deviations. These new procedures are expected to satisfy all three criteria.

Method. In Monte Carlo simulations and in an applied example pertaining to Parkinson disease, we compare current procedures for assessing test-specific deviations (uncorrected and Bonferroni normative comparisons) to new procedures (Holm, one-step resampling, and step-down resampling normative comparisons).

Results. The new procedures are shown to: (a) control the familywise false-positive error rate, whereas uncorrected comparisons do not; (b) have higher sensitivity than Bonferroni-corrected comparisons, with step-down resampling being especially favorable in this respect; and (c) be easy to apply, as they are implemented in a user-friendly normative comparisons website, and as the required normative data are provided by a database.

Conclusion. These new normative comparison procedures, especially step-down resampling, are valuable additional tools for assessing test-specific deviations from the norm in large test batteries.
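To make the sensitivity contrast between the Bonferroni and Holm corrections concrete, the following sketch illustrates both procedures on a set of hypothetical p-values (this is a generic textbook implementation in Python, not the paper's own software; the p-values are made up for illustration). Holm's step-down procedure compares the sorted p-values against progressively less strict thresholds, so it can flag deviations that the single fixed Bonferroni threshold misses, while still controlling the familywise error rate:

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Reject H0 for each test whose p-value is below alpha / m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm_reject(pvals, alpha=0.05):
    """Holm's step-down procedure: test sorted p-values against
    alpha/m, alpha/(m-1), ... and stop at the first non-rejection."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all larger p-values are retained as well
    return reject

# Hypothetical p-values from a four-test battery.
pvals = [0.001, 0.012, 0.02, 0.4]
print(bonferroni_reject(pvals))  # [True, True, False, False]
print(holm_reject(pvals))        # [True, True, True, False]
```

With alpha = 0.05 and four tests, Bonferroni compares every p-value to 0.0125 and rejects two hypotheses, whereas Holm also rejects the third (0.02 <= 0.05/2), illustrating the uniform sensitivity gain of step-down over one-step correction.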