Recent object detection networks suffer performance degradation when training and test data differ in image style and content distribution. In this paper, we propose a domain adaptive method, Adversarial Mixing (AdvMix), in which a label-rich source domain and an unlabeled target domain are jointly trained through adversarial feature alignment and a self-training strategy. To reduce the style gap, we design an Adversarial Gradient Reversal Layer (AdvGRL), comprising a global-level domain discriminator that aligns domain features via gradient reversal, and an adversarial weight mapping function that strengthens domain-invariant features through hard example mining. To reduce the content gap, we introduce a region-mixing self-supervised training strategy in which the highest-confidence region of the target image is merged into the source image, and the synthesized image is supervised by a consistency loss. To improve the reliability of self-training, we propose a strict confidence metric that combines both object uncertainty and bounding-box uncertainty. Extensive experiments on three benchmarks demonstrate that AdvMix achieves strong detection accuracy, surpassing existing domain adaptive methods by nearly 5% mAP.
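
The gradient reversal underlying AdvGRL can be illustrated with a minimal, framework-free sketch (an assumption for illustration; the paper's AdvGRL additionally applies an adversarial weight mapping for hard example mining, which is omitted here). The layer is the identity in the forward pass, but flips and scales the gradient in the backward pass so the feature extractor learns to fool the domain discriminator; real implementations hook into an autograd engine, e.g. a custom `torch.autograd.Function`.

```python
class GradientReversal:
    """Minimal gradient reversal layer (GRL) sketch.

    Forward: identity. Backward: multiply incoming gradients by -lam,
    so minimizing the discriminator loss pushes the feature extractor
    toward domain-invariant features. `lam` is a hypothetical name for
    the reversal strength.
    """

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse and scale the gradient flowing to the feature extractor.
        return [-self.lam * g for g in grad_output]


grl = GradientReversal(lam=0.5)
features = [0.2, -1.3, 0.7]
assert grl.forward(features) == features          # identity forward
print(grl.backward([1.0, -2.0, 0.5]))             # gradients sign-flipped and scaled
```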