Polarimetric inverse synthetic aperture radar (ISAR), with its all-weather, day-and-night imaging capability, plays an important role in space surveillance. The compact polarimetric mode balances hardware complexity against polarimetric information content and is therefore commonly adopted in ISAR systems. However, generating high-resolution ISAR images usually requires a large bandwidth and a wide coherent integration angle, both of which are constrained by hardware limitations. At present, supervised learning methods are widely used for image super-resolution in computer vision. However, their super-resolution performance is often hampered by artifacts and by inadequate exploitation of the low-frequency information contained in low-resolution images. To address these limitations, this work presents a semantic-information-guided semi-supervised deep-learning method. The framework incorporates implicit neural representation to extract and better exploit the information in low-resolution ISAR images. In addition, semantic and super-resolution information are integrated to guide the training process. Datasets of compact polarimetric ISAR images of satellite targets, together with their semantic information, are constructed. The proposed method yields more detailed super-resolution results with fewer artifacts. Quantitative evaluations are also carried out using the peak signal-to-noise ratio (PSNR) metric. Compared with typical methods, the proposed approach achieves superior super-resolution performance, with an improvement of at least 1.394 dB.
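The abstract does not specify how the PSNR figures are computed; the sketch below shows the standard PSNR definition that such quantitative comparisons typically rely on, assuming image arrays normalized to a known peak value (the array shapes, variable names, and toy data here are illustrative, not the paper's actual evaluation code).

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and an estimate.

    `peak` is the maximum possible pixel value (1.0 for normalized images,
    255 for 8-bit images).
    """
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: compare a hypothetical super-resolved output against a reference.
rng = np.random.default_rng(0)
hr = rng.random((256, 256))                       # stand-in high-resolution reference
sr = hr + 0.01 * rng.standard_normal(hr.shape)    # stand-in super-resolved output
print(f"PSNR: {psnr(hr, sr):.3f} dB")
```

Under this definition, the reported gain of at least 1.394 dB corresponds to a measurable reduction in mean squared error of the super-resolved images relative to the reference images.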