Thermal imagery is emerging as a viable candidate for 24/7, all-weather pedestrian detection, owing to thermal sensors' robust performance under varying weather and illumination conditions. Despite the promising results obtained by combining visible (RGB) and thermal cameras in multi-spectral fusion techniques, the complex synchronization requirements, including sensor alignment and calibration, impede their deployment in real-world scenarios. In this paper, we introduce a novel domain adaptation approach that enhances pedestrian detection based solely on thermal images. Our proposed approach involves several stages. First, we use both thermal and visible images as input during the training phase. Second, we leverage a thermal-to-visible hallucination network to generate feature maps similar to those produced by the visible branch. Finally, we design a transformer-based multi-modal fusion module to integrate the hallucinated visible and thermal information more effectively. The thermal-to-visible hallucination network performs domain adaptation, allowing us to obtain both pseudo-visible and thermal features from thermal input alone. Experimental results show that, compared to the baseline model, the mean average precision (mAP) increases by 4.72% and the miss rate decreases by 7.56% on the KAIST dataset.
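
To make the two components concrete, the sketch below illustrates, in PyTorch, one plausible shape for a thermal-to-visible hallucination head and a transformer-based cross-attention fusion module. The module names, channel sizes, and layer choices here are illustrative assumptions, not the authors' exact architecture; the hallucinated features would be regressed against visible-branch features during training, so that only thermal input is required at inference time.

    # Minimal sketch (assumed PyTorch implementation, not the paper's code).
    import torch
    import torch.nn as nn


    class HallucinationHead(nn.Module):
        """Maps thermal feature maps to pseudo-visible feature maps.

        During training its output can be matched (e.g. with an L2 loss)
        to features from a visible-branch backbone; at inference time
        only the thermal stream is needed.
        """

        def __init__(self, channels: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )

        def forward(self, thermal_feat: torch.Tensor) -> torch.Tensor:
            return self.net(thermal_feat)


    class CrossModalFusion(nn.Module):
        """Fuses thermal and (pseudo-)visible features via cross-attention."""

        def __init__(self, channels: int = 256, num_heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(channels)

        def forward(self, thermal_feat: torch.Tensor,
                    visible_feat: torch.Tensor) -> torch.Tensor:
            b, c, h, w = thermal_feat.shape
            # Flatten spatial dimensions into token sequences: (B, H*W, C).
            q = thermal_feat.flatten(2).transpose(1, 2)
            kv = visible_feat.flatten(2).transpose(1, 2)
            # Thermal queries attend to (pseudo-)visible keys/values.
            fused, _ = self.attn(q, kv, kv)
            fused = self.norm(fused + q)  # residual connection
            return fused.transpose(1, 2).reshape(b, c, h, w)


    if __name__ == "__main__":
        thermal = torch.randn(2, 256, 32, 40)  # backbone features from thermal input
        hallucinate = HallucinationHead()
        fuse = CrossModalFusion()
        pseudo_visible = hallucinate(thermal)  # no RGB camera needed at inference
        out = fuse(thermal, pseudo_visible)
        print(out.shape)  # torch.Size([2, 256, 32, 40])

In this sketch the fused features would feed a standard detection head; the key design point it illustrates is that the visible branch exists only at training time, while deployment relies on the thermal stream and its hallucinated counterpart.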