Collecting a large dataset of real infrared (IR) images is expensive, time-consuming, and even infeasible in some scenarios. With recent progress in machine learning, it has become more feasible to replace real IR images with suitable synthetic IR images in learning-based IR systems. However, this alternative may fail to achieve the desired performance, due to the gap between real and synthetic IR images. Inspired by adversarial learning for image-to-image translation, we propose the Synthetic IR Refinement Generative Adversarial Network (SIR-GAN) to narrow this gap. By learning the bidirectional mappings between two unpaired domains, where the source domain contains a large number of simulated IR images and the target domain contains a limited quantity of real IR images, the realism of the simulated IR images generated from the IR Simulator is significantly improved. Specifically, driven by the idea of simultaneously transferring infrared characteristics and preserving target semantic information, we propose a SIR refinement loss that adds an infrared loss and a structure loss to the adversarial loss and the cycle-consistency loss. To further reduce the gap, stabilize training, and avoid artefacts, we modify the proposed algorithm by developing a training strategy, adding a U-Net structure to the generators, using dilated convolutions in the discriminators, and adopting N-Adam as the optimizer. Qualitative and quantitative experiments, together with an ablation study, demonstrate the superiority of the proposed approach compared with state-of-the-art techniques in terms of realism and fidelity. In addition, our refined IR images are evaluated in the context of a feasibility study, where the accuracy of the trained classifier is significantly improved by adding our refined data into a small real-data training set.
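As a rough illustration of how the four terms above might be combined, the following PyTorch sketch assembles a total refinement loss from an adversarial term, a cycle-consistency term, and precomputed infrared and structure terms. The class name SIRRefinementLoss, the weighting coefficients, and the choice of least-squares and L1 objectives are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the SIR refinement loss described above.
# Names, weights, and objective choices are assumptions for illustration.
import torch
import torch.nn as nn

class SIRRefinementLoss(nn.Module):
    """Combines adversarial, cycle-consistency, infrared, and structure terms."""
    def __init__(self, lambda_cyc=10.0, lambda_ir=1.0, lambda_struct=1.0):
        super().__init__()
        self.gan_loss = nn.MSELoss()    # least-squares GAN objective (assumed)
        self.cycle_loss = nn.L1Loss()   # pixel-wise cycle-consistency
        self.lambda_cyc = lambda_cyc
        self.lambda_ir = lambda_ir
        self.lambda_struct = lambda_struct

    def forward(self, d_fake, real_label, reconstructed, original,
                ir_term, struct_term):
        # Adversarial term: push refined images toward the real-IR manifold.
        adv = self.gan_loss(d_fake, real_label)
        # Cycle-consistency term: a refined image mapped back to the source
        # domain should reconstruct the original simulated image.
        cyc = self.cycle_loss(reconstructed, original)
        # Infrared and structure terms (precomputed scalar tensors here)
        # transfer IR characteristics while preserving target semantics.
        return (adv + self.lambda_cyc * cyc
                + self.lambda_ir * ir_term
                + self.lambda_struct * struct_term)
```

The abstract's choice of N-Adam is available in PyTorch as torch.optim.NAdam, e.g. torch.optim.NAdam(generator.parameters(), lr=2e-4); the learning rate here is likewise an assumed placeholder.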