Variability in altitude, geography, and weather conditions across datasets degrades state-of-the-art (SOTA) DNN object detection performance. Unsupervised and semi-supervised domain adaptation (DA) have proven effective at bridging the gap between two different dataset distributions. However, state-of-the-art pseudo-labeling is susceptible to background noise, which hinders optimal performance on target datasets, and existing contrastive DA methods overlook the bias introduced by false negative (FN) target samples, which misleads the entire learning process. This paper proposes DCLDA (support-guided debiased contrastive learning for domain adaptation) to reliably label the unlabeled target dataset and remove this bias from target detection. We introduce (i) a support-set-curated approach that generates high-quality pseudo-labels from target-dataset proposals, (ii) domain alignment over local, global, and instance-aware features that reduces the distribution gap across remote sensing datasets, and (iii) a novel debiased contrastive loss function that makes the model more robust to the variable appearance of a class across images and domains. The proposed debiased contrastive learning pivots on class probabilities to address the challenge of false negatives in the unsupervised framework. Our model outperforms the compared SOTA models with minimum gains of +3.9%, +3.2%, +12.7%, and +2.1% mAP on the DIOR, DOTA, VisDrone, and UAVDT datasets, respectively.
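To illustrate the underlying idea (a sketch only, not the exact DCLDA formulation, whose details follow in the method section), a class-probability-weighted debiased contrastive objective can be written as

\[
\mathcal{L}_{\mathrm{dcl}} = -\log \frac{\exp\!\left(q^{\top} k^{+}/\tau\right)}{\exp\!\left(q^{\top} k^{+}/\tau\right) + \sum_{j=1}^{N} \left(1 - p_j\right)\exp\!\left(q^{\top} k_j^{-}/\tau\right)},
\]

where \(q\) denotes a query instance embedding, \(k^{+}\) its positive, \(k_j^{-}\) the candidate negatives, \(\tau\) a temperature, and \(p_j\) the predicted probability that negative \(j\) actually shares the query's class (all symbols here are assumed for illustration). Down-weighting negatives with high \(p_j\) suppresses the contribution of likely false negatives and thereby the bias they would otherwise introduce into contrastive training.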