Identification in most sample selection models depends on the independence of the regressors and the error terms conditional on the selection probability. All quantile and mean functions are parallel in these models; this implies that quantile estimators cannot reveal any (per assumption nonexistent) heterogeneity. Quantile estimators are nevertheless useful for testing the conditional independence assumption because they are consistent under the null hypothesis. We propose tests of the Kolmogorov-Smirnov type based on the conditional quantile regression process. Monte Carlo simulations show that their size is satisfactory and their power sufficient to detect deviations under plausible data-generating processes. We apply our procedures to female wage data from the 2011 Current Population Survey and show that homogeneity is clearly rejected.

The conditional independence assumption is an identifying assumption; its violation leads to the inconsistency of most sample selection estimators (including those concerned only with mean effects). To the best of our knowledge, ours is the first test of this identifying assumption. To implement this testing idea we use the sample selection correction for quantile regression proposed by Buchinsky (1998a, 2001). These papers extended the series estimator of Newey (2009) to the estimation of quantiles. The estimator has been applied to many datasets, among others in both original articles, to analyze the heterogeneous effects of the explanatory variables on the distribution of the outcome. Another contribution of this paper is to show that this estimator is consistent in only two cases: when all quantile regression slopes are equal or when selection is random. The reason is that Buchinsky (1998a) assumes that the error terms are independent of the regressors given the selection probability. This implies that all quantile slope coefficients and the mean coefficients are identical; i.e.
it excludes heterogeneous effects even though their analysis has been the main motivation for using quantile regression in recent applications.

The Buchinsky estimator nevertheless remains useful for two reasons, which were the initial motivations for quantile regression: it is robust, and it can serve as the basis for a test of independence. Koenker and Bassett (1978) also assume independence in their seminal paper and motivate quantile regression purely from a robustness and efficiency point of view in the presence of non-Gaussian errors. Similarly, the estimator of Buchinsky has a bounded influence function (in the direction of the dependent variable), so it is more robust than mean regression. Under the null hypothesis of conditional independence, the slope coefficients are constant as a function of the quantile, and the procedure proposed by Buchinsky (1998a) consistently estimates them. Under the alternative hypothesis, the estimates do not converge to the true values but will be a non-trivial function of the quantile. These two properties justify testing the conditional independence assumption by testing whether th...