Foster et al. (2024) offer a new perspective on the validity of the predictors used for personnel selection. The heart of their argument is that: (a) there are multiple sources of variance in the ratings that are widely used as criteria in estimating validity, (b) generalizability theory gives us a tool for partitioning these sources of variance, (c) person (i.e., ratee) main effects are the only source of reliable between-ratee variance in performance ratings that is predictable, (d) existing work on partitioning variance estimates that person main effects account for about 25% of rating variance, (e) we should therefore rescale our validity estimates as a percentage of this 25% of potentially explainable variance, and (f) by doing so we find that our predictors explain a larger proportion of the explainable variance in job performance ratings. For example, under this logic a predictor with an observed validity of .30 accounts for 9% of total rating variance but for 36% (.09/.25) of the variance treated as explainable. This is a novel and clever argument. However, we believe that Foster et al. (2024) make several problematic assumptions, and these lead us to conclude that the proportion of explainable variance in performance ratings in typical validation research is in fact far higher than the 25% estimate used by Foster et al. (2024). Consequently, we have not underestimated the value of our predictors. Below we offer the series of observations that leads to our conclusion.

Faulty assumptions about sources of consistent (reliable) between-ratee variance

Much of the argument made by Foster et al. (2024) is premised on the claim that person main effects are the only source of variance "specific to the person rated" (p. 3) or, more specifically, the only source of consistent between-ratee variance that can be predicted by the predictors commonly used in personnel selection. In this case, by "consistent" we mean variance that is consistent (reliable) across indicators of a given performance dimension of interest, such as raters and/or items. Unfortunately, this premise appears to reflect a fundamental misunderstanding of the sources of consistent between-ratee variance in ratings, regardless of whether one is dealing with a multi-item, multisource rating assessment designed to assess multiple dimensions or a simpler, behaviorally anchored rating assessment with only one rating scale (item) per dimension assessed.

To set the stage, we make three key points. First, sources of variance can be viewed as a function of five components (person [p], item [i], dimension [d], rater [r], and source [s]) and all possible interactions among them. Second, a subset of these components reflects between-person variance that is consistent across raters (i.e., reliable variance from a traditional perspective), namely the person main effect and the interaction effects that involve persons but do not include raters (e.g., person × dimension, person × item). Any consistent between-person component that does not include raters is reliable from a traditional perspective and, thus, is potentially predictable. Third, depending on one's measurement design, not all effects can be uniquely estimated. To illustrate, consider Figure 1, which provides a