Designing face recognition systems that are capable of matching face images obtained in the thermal spectrum with those obtained in the visible spectrum is a challenging problem. In this work, we propose the use of a semantic-guided generative adversarial network (SG-GAN) to automatically synthesize visible face images from their thermal counterparts. Specifically, semantic labels, extracted by a face parsing network, are used to compute a semantic loss function that regularizes the adversarial network during training. These semantic cues denote high-level facial component information associated with each pixel. Further, an identity extraction network is leveraged to extract multi-scale features, which are used to compute an identity loss function. To achieve photo-realistic results, a perceptual loss function is introduced during network training to ensure that the synthesized visible face is perceptually similar to the target visible face image. We extensively evaluate the benefits of the individual loss functions and combine them to effectively learn the mapping from thermal to visible face images. Experiments involving two multispectral face datasets show that the proposed method achieves promising results in both face synthesis and cross-spectral face matching.
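To make the combined objective concrete, the sketch below shows one plausible way the generator's loss terms could be assembled in PyTorch. It is a minimal illustration, not the paper's implementation: the module names (`parser`, `id_net`, `percep_net`), the assumption that `id_net` returns a list of multi-scale feature maps, the specific distance functions, and the weighting coefficients `lam_*` are all placeholders chosen for clarity.

```python
import torch
import torch.nn.functional as F

def sg_gan_generator_loss(fake_vis, real_vis, d_fake_logits,
                          parser, id_net, percep_net,
                          lam_sem=1.0, lam_id=1.0, lam_per=1.0):
    """Illustrative generator objective: adversarial + semantic +
    identity + perceptual terms. All sub-networks and weights are
    hypothetical stand-ins for the paper's components."""
    # Adversarial term: push the discriminator to label synthesized
    # visible faces as real (non-saturating GAN loss on logits).
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

    # Semantic term: the synthesized face should parse into the same
    # per-pixel facial-component labels as the target visible face.
    with torch.no_grad():
        target_labels = parser(real_vis).argmax(dim=1)  # (N, H, W) labels
    sem = F.cross_entropy(parser(fake_vis), target_labels)

    # Identity term: distance between multi-scale identity features of
    # the synthesized and target faces (assumes id_net returns a list).
    id_loss = sum(F.l1_loss(f, r)
                  for f, r in zip(id_net(fake_vis), id_net(real_vis)))

    # Perceptual term: feature-space distance in a pretrained network,
    # encouraging photo-realistic, perceptually similar output.
    per = F.l1_loss(percep_net(fake_vis), percep_net(real_vis))

    return adv + lam_sem * sem + lam_id * id_loss + lam_per * per
```

In this reading, the semantic and identity terms act as complementary regularizers: the former constrains the spatial layout of facial components, while the latter preserves subject-discriminative information needed for cross-spectral matching.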