One of the fundamental assumptions behind many supervised machine-learning algorithms is that training and test data follow the same probability distribution. However, this assumption is often violated in practice, for example, because of an unavoidable sample selection bias or nonstationarity of the environment. When the assumption is violated, standard machine-learning methods suffer from significant estimation bias. In this article, we consider two scenarios of such distribution change: covariate shift, where input distributions differ, and class-balance change, where class-prior probabilities vary in classification, and review semi-supervised adaptation techniques based on importance weighting.

FIGURE 1 | Covariate shift. Input distributions change, but the conditional distribution of outputs given inputs does not. (a) Input densities and importance. (b) Learning target function, training samples, and test samples.
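To make the importance-weighting idea concrete, the following is a minimal sketch of the covariate-shift setting in Figure 1: training and test inputs are drawn from different Gaussian densities while the conditional distribution of outputs given inputs stays fixed, and a simple linear model is fit by ordinary versus importance-weighted least squares. The specific densities, the sinc target function, the linear model, and all variable names are illustrative assumptions, not the article's estimators; in particular, the true importance w(x) = p_test(x)/p_train(x) is known here by construction, whereas in practice it must be estimated.

```python
# Toy covariate-shift experiment: importance-weighted vs. ordinary least squares.
# Assumed setup (not from the article): Gaussian train/test input densities,
# a sinc target, and a straight-line model.
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: p_train(x) != p_test(x), but p(y|x) is shared.
n_tr, n_te = 150, 150
x_tr = rng.normal(loc=1.0, scale=0.5, size=n_tr)   # training input density
x_te = rng.normal(loc=2.0, scale=0.3, size=n_te)   # test input density
y_tr = np.sinc(x_tr) + 0.1 * rng.normal(size=n_tr) # shared conditional p(y|x)

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_test(x) / p_train(x), known here by construction.
w = gauss_pdf(x_tr, 2.0, 0.3) / gauss_pdf(x_tr, 1.0, 0.5)

# Design matrix for the linear model y = theta_0 + theta_1 * x.
X = np.column_stack([np.ones(n_tr), x_tr])

# Ordinary least squares (ignores the distribution change).
theta_ols = np.linalg.lstsq(X, y_tr, rcond=None)[0]

# Importance-weighted least squares: solve (X^T W X) theta = X^T W y.
W = np.diag(w)
theta_iw = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_tr)

# Compare generalization error under the test input distribution.
X_te = np.column_stack([np.ones(n_te), x_te])
y_te = np.sinc(x_te)  # noiseless targets for evaluation
print("OLS  test MSE:", np.mean((X_te @ theta_ols - y_te) ** 2))
print("IWLS test MSE:", np.mean((X_te @ theta_iw - y_te) ** 2))
```

Because the weights upweight training points that fall in the high-density region of the test distribution, the importance-weighted fit typically attains a lower test error than the unweighted fit in this setting, which is the bias-correction effect the article reviews.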