In both the theoretical and applied literatures, there is confusion regarding accurate values for expected Black-White subgroup differences in personnel selection test scores. Much of this confusion arises because empirical estimates of standardized subgroup differences (d) are subject to many of the same biasing factors associated with validity coefficients (i.e., d is functionally related to a point-biserial r). To address these issues, we review, cumulate, categorize, and analyze a systematic set of predictor-specific meta-analyses from the literature. We focus on confounds arising from the widespread use of concurrent, rather than applicant, samples in the literature on Black-White d. We also examine the potential confusion that arises when different constructs are assessed within the same selection test method, as well as the influence of those constructs on d. We show that many types of predictors (such as biodata inventories and assessment centers) can have magnitudes of d that are much larger than previously thought. Indeed, some predictors (such as work samples) can have ds similar to those associated with paper-and-pencil tests of cognitive ability. We present more realistic values of d for both researcher and practitioner use. Implications for practice and future research are noted.

In the literature on personnel selection, as well as in organizational applications, much attention is given to both (a) the level of validity of individual predictors and (b) the level of adverse impact associated with such predictors. Organizations often focus on selection validity in order to increase the expected performance levels of their hires, increase organizational utility, and defend test use when adverse impact is present. Organizations