In many practical applications, the data used to train a machine learning model and the data encountered at deployment do not follow the same distribution. Transfer learning, and in particular domain adaptation, makes it possible to overcome this issue by adapting the source model to a new target data distribution, thereby generalizing knowledge from the source to the target domain. In this work, we present a method that makes the adaptation process more transparent by providing two complementary explanation mechanisms. The first mechanism explains how the source and target distributions are aligned in the latent space of the domain adaptation model. The second mechanism provides descriptive explanations of how the decision boundary changes in the adapted model with respect to the source model. Along with a description of the method, we also report initial results obtained on a publicly available, real-life dataset.