Deep neural networks (DNNs) have been widely applied to synthetic aperture radar (SAR) image interpretation tasks such as target classification and recognition, since they can automatically learn high-level semantic features in a data-driven, task-driven manner. However, supervised learning methods require abundant labeled samples to avoid over-fitting of the designed networks, and such labels are usually difficult to obtain for SAR imagery. To address this issue, a novel two-stage algorithm based on contrastive learning (CL) is proposed for SAR image target classification. In the pretraining stage, a convolutional neural network (CNN)-based encoder is first pre-trained with a contrastive strategy to extract self-supervised representations (SSRs) from an unlabeled training set; this encoder maps SAR images into a discriminative embedding space. Meanwhile, the optimal encoder is selected using a linear evaluation protocol, which indirectly confirms the transferability of the learned SSRs to downstream tasks. In the fine-tuning stage, a SAR target classifier is then adequately trained in a supervised manner using only a few labeled SSRs, benefiting from the powerful pre-trained encoder. Numerical experiments on the public MSTAR dataset demonstrate that the model based on the proposed self-supervised feature learning algorithm is superior to conventional supervised methods under labeled-data constraints. In addition, knowledge transfer experiments on the OpenSARShip dataset show that the encoder pre-trained on MSTAR supports classifier training with high efficiency and precision. These results demonstrate the excellent training convergence and classification performance of the proposed algorithm.
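The contrastive pretraining stage described above is commonly implemented with an NT-Xent (normalized temperature-scaled cross-entropy) loss over two augmented views of each unlabeled image; the abstract does not specify the exact loss, so the following NumPy sketch should be read as an illustrative assumption, not the paper's implementation. The function name `nt_xent_loss` and the temperature value are hypothetical.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views (illustrative sketch).

    z1, z2: (N, D) encoder embeddings of two augmentations of the same
    N images. Positive pairs are (z1[i], z2[i]); every other sample in
    the batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    # Mask self-similarity so a sample is never its own negative.
    np.fill_diagonal(sim, -np.inf)
    # Index of each sample's positive partner: i <-> i + n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # Cross-entropy: -log softmax of the positive-pair similarity.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Minimizing this loss pulls the two views of the same SAR image together in the embedding space while pushing apart all other images in the batch, which is what yields a discriminative embedding space for the downstream classifier.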