When designing classifiers, one is often confronted with situations where the data distribution in the source domain differs from that in the target domain. This problem of domain adaptation has received considerable attention in recent years. In this paper, we study the challenging problem of unsupervised domain adaptation, where no labels are available in the target domain. In contrast to earlier works, which assume a single domain shift between the source and target domains, we allow for multiple domain shifts. To this end, we develop a novel framework based on the parallel transport of the union of source subspaces on the Grassmann manifold. Various recognition experiments show that modeling the data with a union of subspaces, rather than a single subspace, improves recognition performance.
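For reference, the standard closed-form expressions for geodesics and parallel transport on the Grassmann manifold (due to Edelman et al.) can be sketched as follows; the symbols $Y$, $H$, $U$, $\Sigma$, $V$, and $\Delta$ are illustrative choices and do not necessarily reflect the notation or the exact construction used in this paper. Let $Y \in \mathbb{R}^{D \times d}$ with $Y^{\top}Y = I_d$ span a source subspace, and let the tangent vector $H$, with thin SVD $H = U\Sigma V^{\top}$, encode a domain shift. The geodesic emanating from $Y$ in direction $H$, and the parallel transport of a tangent vector $\Delta$ along it, are given by
\[
Y(t) = Y V \cos(\Sigma t)\, V^{\top} + U \sin(\Sigma t)\, V^{\top},
\qquad
\tau\Delta(t) = \bigl( -Y V \sin(\Sigma t)\, U^{\top} + U \cos(\Sigma t)\, U^{\top} + (I - U U^{\top}) \bigr)\, \Delta .
\]
Intuitively, in a union-of-subspaces setting such as the one considered here, each source subspace can be transported along the estimated shift in this manner, so that the entire union is moved consistently toward the target domain.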