Deep Transfer Learning (DTL) combines the strength of deep learning in feature representation with the strength of transfer learning in knowledge transfer, a combination that has placed it at the forefront of research and development in Intelligent Fault Diagnosis (IFD). Early DTL approaches based on fine-tuning proved effective but faced considerable obstacles in complex domains, and Adversarial Deep Transfer Learning (ADTL) emerged in response to these challenges. This review first categorizes ADTL into non-generative and generative models: the former extends traditional DTL by focusing on the efficient transfer of features and mapping relationships, while the latter employs techniques such as Generative Adversarial Networks (GANs) to perform feature transformation. A detailed examination of recent advances of ADTL in the IFD field follows. The review concludes by summarizing the current challenges and future directions for DTL in fault diagnosis, including data imbalance, negative transfer, and the stability of adversarial training. Through this analysis, the review aims to offer practical insights and guidance for optimizing and deploying ADTL in real-world industrial scenarios.
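To make the non-generative branch concrete, the sketch below illustrates one common form of adversarial feature alignment (a DANN-style gradient-reversal setup); it is a minimal illustration in PyTorch under assumed layer sizes and class counts, not an implementation drawn from any specific surveyed method.

```python
# Minimal sketch of non-generative adversarial transfer (DANN-style),
# assuming PyTorch; the layer sizes, 10 fault classes, and lambd value
# are illustrative assumptions, not taken from the surveyed papers.
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())   # shared encoder
fault_classifier = nn.Sequential(nn.Linear(256, 10))                 # fault classes (assumed 10)
domain_discriminator = nn.Sequential(nn.Linear(256, 2))              # source vs. target

def adversarial_step(x_src, y_src, x_tgt, lambd=1.0):
    """One training step: supervised fault loss on labeled source data plus an
    adversarial domain-confusion loss on pooled source/target features."""
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # Supervised diagnosis loss (source domain only).
    cls_loss = nn.functional.cross_entropy(fault_classifier(f_src), y_src)

    # Domain labels: 0 = source, 1 = target.
    feats = torch.cat([f_src, f_tgt])
    domains = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))]).long()

    # Gradient reversal pushes the encoder toward domain-invariant features
    # while the discriminator tries to tell the two domains apart.
    dom_logits = domain_discriminator(GradReverse.apply(feats, lambd))
    dom_loss = nn.functional.cross_entropy(dom_logits, domains)

    return cls_loss + dom_loss
```

In this sketch the encoder and the domain discriminator are trained against each other: the discriminator minimizes the domain-classification loss while the reversed gradient makes the encoder maximize it, which is the mechanism the non-generative ADTL family relies on to align source and target feature distributions.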