In traditional machine learning, the learner assumes that the training and testing datasets are drawn from the same distribution. In most practical scenarios, however, the two datasets are drawn from two different distributions, called the source distribution and the target distribution. In this context, classical machine learning algorithms often fail: models trained on the source data perform poorly on the target data. To address this problem, many transfer learning techniques have been developed, following one of three main strategies: parameter-based, instance-based, and feature-based transfer. The choice of the appropriate strategy is mainly determined by the nature of the shift between the source and target distributions. For example, to deal with sampling bias, where part of the population is over- or under-represented in the training set, instance-based approaches are useful to adequately reweight the source data in the training phase. If the shift is caused by a change in data acquisition, such as sensor drift, feature-based methods help to correct it by learning a feature representation common to the source and target data.

In a real application, choosing the best transfer learning strategy in advance is challenging, and one often needs to evaluate several models in practice. Because the different transfer methods were introduced by various contributors, no common framework has so far been available for rapid development. To tackle this issue, we propose a Python library for transfer learning: ADAPT (Awesome Domain Adaptation Python Toolbox), which allows practitioners to compare the results of many methods on their particular problem.
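As an illustration of the kind of comparison ADAPT enables, the sketch below fits one instance-based method (KMM) and one feature-based method (CORAL) on the same synthetic distribution shift. It assumes ADAPT's scikit-learn-style interface, in which the target data is passed through an Xt argument and the adapted estimator exposes fit and predict; exact constructor parameters may differ between library versions, and the data here is a synthetic placeholder rather than a real application.

```python
# Minimal sketch: comparing two transfer strategies with ADAPT.
# Assumes ADAPT's scikit-learn-style interface; constructor
# parameters may vary across versions. Xs/ys (source) and Xt/yt
# (target) are synthetic placeholders for real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from adapt.instance_based import KMM    # instance reweighting
from adapt.feature_based import CORAL   # feature alignment

rng = np.random.default_rng(0)
# Source and target data drawn from shifted distributions.
Xs = rng.normal(0.0, 1.0, (200, 5))
ys = Xs @ np.ones(5) + rng.normal(0.0, 0.1, 200)
Xt = rng.normal(0.5, 1.2, (200, 5))
yt = Xt @ np.ones(5) + rng.normal(0.0, 0.1, 200)

# Fit each adaptation method around the same base estimator
# and compare target-domain errors.
for Method in (KMM, CORAL):
    model = Method(estimator=Ridge(), Xt=Xt, verbose=0, random_state=0)
    model.fit(Xs, ys)
    pred = model.predict(Xt)
    print(Method.__name__, "target MAE:", mean_absolute_error(yt, pred))
```

Because both methods wrap the same base estimator and share the same fit/predict workflow, swapping strategies amounts to changing a single class name, which is what makes this kind of side-by-side evaluation practical.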