Similarity-based retrieval of semantic graphs is a crucial task in Process-Oriented Case-Based Reasoning (POCBR) that is usually complex and time-consuming, as it requires some form of inexact graph matching. Previous work tackles this problem by using Graph Neural Networks (GNNs) to learn pairwise graph similarities. In this paper, we present a novel approach that improves on GNN-based case retrieval with a Transfer Learning (TL) setup composed of two phases: First, a pretraining phase trains a model to assess the similarities between graph nodes and edges and their semantic annotations. Second, the pretrained model is integrated into the GNN model either via fine-tuning, i.e., the parameters of the pretrained model are further trained, or via feature extraction, i.e., the parameters of the pretrained model are frozen and treated as constants. The experimental evaluation compares the quality and performance of the TL-based models with the GNN models from previous work on three semantic graph domains with varying properties. The results show the great potential of the proposed approach for reducing both the similarity prediction error and the training time.
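To make the two integration strategies concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module names `PretrainedNodeEdgeEncoder` and `GraphSimilarityGNN` are hypothetical placeholders, and the only point illustrated is how feature extraction freezes the pretrained parameters while fine-tuning leaves them trainable.

```python
# Minimal sketch (assumed placeholder modules, not the paper's architecture)
# of integrating a pretrained node/edge similarity model into a downstream GNN.
import torch
import torch.nn as nn


class PretrainedNodeEdgeEncoder(nn.Module):
    """Stand-in for a model pretrained to assess node/edge annotation similarities."""

    def __init__(self, in_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)


class GraphSimilarityGNN(nn.Module):
    """Stand-in for the downstream model that predicts a pairwise graph similarity."""

    def __init__(self, encoder: nn.Module, hidden_dim: int = 64):
        super().__init__()
        self.encoder = encoder                 # pretrained component from phase 1
        self.head = nn.Linear(hidden_dim, 1)   # similarity prediction head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.encoder(x)))


encoder = PretrainedNodeEdgeEncoder()          # assume pretrained weights are loaded here
model = GraphSimilarityGNN(encoder)

USE_FEATURE_EXTRACTION = True
if USE_FEATURE_EXTRACTION:
    # Feature extraction: freeze the pretrained parameters (treat them as constants).
    for p in model.encoder.parameters():
        p.requires_grad = False
# Fine-tuning: leave requires_grad=True so the pretrained weights keep training.

# Only the parameters that remain trainable are passed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

The design choice mirrors the trade-off stated above: feature extraction reduces the number of trainable parameters and thus training time, while fine-tuning allows the pretrained representation to adapt further to the retrieval task.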