BERTScore is an automatic evaluation metric for machine translation. It computes similarity scores between candidate and reference tokens using contextual embeddings. The quality of these embeddings is crucial, but embeddings for low-resource languages tend to be poor. Multilingual pre-trained models can transfer knowledge from high-resource languages to low-resource languages, but the embeddings they produce are not always well aligned. To improve BERTScore for low-resource languages, we align embeddings by fine-tuning pre-trained models with contrastive learning, which pulls semantically similar sentences closer together and pushes dissimilar sentences apart. We experiment on Hausa, a low-resource language, in the WMT21 English-Hausa translation task, fine-tuning three different pre-trained models (XLM-R, mBERT, LaBSE). Our experimental results show that the proposed method not only achieves higher correlation with human judgments than the original BERTScore, but also surpasses surface-based metrics such as BLEU and chrF, as well as the state-of-the-art metric COMET, when fine-tuning mBERT. Moreover, the proposed method produces better embeddings than pre-trained embedding models (E5, BGE, M3E) that are fine-tuned on other NLP tasks. We also extend our experiments to Chinese, a high-resource language, in an English-Chinese translation task, and the results further confirm the effectiveness of our method.
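To make the contrastive fine-tuning idea concrete, the following is a minimal sketch, not the paper's exact recipe: it assumes mean-pooled sentence embeddings from mBERT, an in-batch InfoNCE-style loss (aligned source-reference pairs as positives, all other pairs in the batch as negatives), a temperature of 0.05, and two toy English-Hausa sentence pairs standing in for real WMT parallel data. Model choice, pooling, temperature, and the example sentences are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Assumed backbone; the paper also fine-tunes XLM-R and LaBSE.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.train()

def embed(sentences):
    """Mean-pool the last hidden states into one vector per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state                 # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()      # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)               # (B, H)

def contrastive_loss(src_emb, tgt_emb, temperature=0.05):
    """In-batch contrastive loss: the i-th target is the positive for the i-th source;
    every other target in the batch acts as a negative."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature                      # (B, B) scaled cosine similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Toy parallel batch (illustrative sentences only); real training uses WMT parallel data.
src_sentences = ["Thank you very much.", "The children are playing outside."]
tgt_sentences = ["Na gode sosai.", "Yara suna wasa a waje."]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = contrastive_loss(embed(src_sentences), embed(tgt_sentences))
loss.backward()
optimizer.step()
```

After fine-tuning, the encoder's token embeddings can be plugged into the usual BERTScore greedy token-matching computation, so the metric itself is unchanged; only the embedding space it operates on is better aligned across languages.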