Classification into month-specific cutpoints is used to minimize the misclassification associated with a single measurement of serum 25(OH)D. This study aims to evaluate this strategy and to compare it with the widely used classification into overall cutpoints. For this purpose, we studied 69 553 subjects in whom serum 25(OH)D was tested on two different occasions. The level of agreement between the quartiles of the first and second tests was 43.8% for the month-specific quartiles and 43.1% for the overall quartiles. The level of agreement between the quartiles of the two approaches was 80.0% in the first test and 94.3% in the second test. The extent of seasonal variation (summer-autumn compared with winter-spring) in serum 25(OH)D was higher in males and in Jews, inversely associated with baseline levels, body mass index and age, and directly associated with socioeconomic class. The month-specific cutpoint strategy does not appear to offer an advantage over the overall cutpoint strategy. Hence, determining the long-term average level is considered the optimal measure of exposure.3

Most association studies between vitamin D and dichotomous outcomes rely on a single measurement of serum 25(OH)D level,4,5 leading to non-differential misclassification of the long-term exposure. Several approaches have been used to minimize exposure bias when creating 25(OH)D exposure categories; classification into month- or season-specific quartiles, computed within each month or season of blood sampling, is used in some studies to reduce this misclassification.4,6,7 This approach assumes that subjects remain in their month- or season-specific quartiles,6 which may therefore be representative of their long-term average levels. This study aims to evaluate this strategy and to compare it with the widely used classification into overall cutpoints, which does not account for the time of sample collection.
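To make the two classification strategies concrete, the following is a minimal sketch of how overall and month-specific quartile cutpoints could be derived and their agreement computed. It uses synthetic data with a simple sinusoidal seasonal component and hypothetical column names (`month`, `vitd`) in place of the study's measurements; it is an illustration of the general approach, not the authors' analysis code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic illustration: one 25(OH)D measurement per subject, with a
# simple sinusoidal term added to mimic higher summer-autumn levels.
n = 5_000
month = rng.integers(1, 13, size=n)
level = rng.normal(50, 15, size=n) + 10 * np.sin((month - 3) / 12 * 2 * np.pi)
df = pd.DataFrame({"month": month, "vitd": level})

# Overall cutpoints: quartiles of the whole sample, ignoring sampling month.
df["q_overall"] = pd.qcut(df["vitd"], 4, labels=[1, 2, 3, 4])

# Month-specific cutpoints: quartiles computed separately within each
# calendar month of blood sampling.
df["q_month"] = (
    df.groupby("month")["vitd"]
      .transform(lambda s: pd.qcut(s, 4, labels=[1, 2, 3, 4]))
)

# Agreement between the two strategies: share of subjects assigned to the
# same quartile by both approaches.
agreement = (df["q_overall"].astype(int) == df["q_month"].astype(int)).mean()
print(f"Agreement between overall and month-specific quartiles: {agreement:.1%}")
```

The same agreement calculation applies when comparing a subject's quartile on a first versus a second test under a single strategy, which is how the stability of each classification scheme is assessed here.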