Domain Adaptation (DA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. While remarkable advances have been made recently, the power of DA methods still depends heavily on network depth, especially when the domain discrepancy is large, posing a serious challenge to DA in low-resource scenarios where fast and adaptive inference is required. How to bridge transferability and resource-efficient inference in DA thus becomes an important problem. In this paper, we propose Resource Efficient Domain Adaptation (REDA), a general framework that adaptively adjusts computational resources across "easier" and "harder" inputs. Built on existing multi-exit architectures, REDA has two novel designs: 1) transferable distillation, which distills the transferability of the top classifier into the early exits; and 2) consistency weighting, which controls the degree of distillation via prediction consistency. As a general method, REDA can be readily combined with a variety of DA methods. Empirical results and analyses show that REDA substantially improves accuracy and accelerates inference under domain shift and low resource budgets.

CCS CONCEPTS
• Computing methodologies → Transfer learning; Neural networks;
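
The abstract describes the two designs only at a high level. The following minimal PyTorch sketch illustrates one plausible form of the combined training objective, assuming a multi-exit network that emits logits at each early exit and at the top classifier; the function and parameter names (reda_distillation_loss, exit_logits, temperature) are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def reda_distillation_loss(exit_logits, final_logits, labels, temperature=2.0):
    """Hypothetical sketch: distill the top classifier into each early exit,
    with the distillation strength gated by per-sample prediction consistency."""
    # Supervised loss on the top (final) classifier.
    loss = F.cross_entropy(final_logits, labels)

    # Top classifier acts as the teacher; detach so no gradient flows back.
    teacher_prob = F.softmax(final_logits.detach() / temperature, dim=1)
    teacher_pred = teacher_prob.argmax(dim=1)

    for logits in exit_logits:  # one logits tensor per early exit
        # Consistency weighting (assumed form): keep the distillation signal
        # only for samples where the early exit agrees with the top classifier.
        student_pred = logits.argmax(dim=1)
        weight = (student_pred == teacher_pred).float()

        # Transferable distillation: temperature-scaled KL divergence from
        # the early exit's distribution toward the teacher's distribution.
        kl = F.kl_div(
            F.log_softmax(logits / temperature, dim=1),
            teacher_prob,
            reduction="none",
        ).sum(dim=1)
        loss = loss + (weight * kl).mean() * temperature ** 2

    return loss
```

At inference time, such a multi-exit network would return early for "easier" inputs whose exit confidence passes a threshold, which is what yields the adaptive computation the abstract refers to.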