This study addresses neural machine translation for the Turkish-English (TR-EN) language pair, which is considered low-resource. We investigate fine-tuning strategies for multilingual pre-trained language models, focusing on parameter-efficient adapter methods. We experiment with various combinations of LoRA and bottleneck adapters and find that combining the two outperforms the other configurations while requiring only 5% of the pre-trained model's parameters to be fine-tuned. The proposed method therefore improves parameter efficiency and reduces computational cost: compared with full fine-tuning of the multilingual pre-trained model, it shows only a 3% difference in BLEU score, achieving nearly the same performance at a significantly lower cost. Models using only bottleneck adapters perform worse despite having more trainable parameters, and adding LoRA alone does not yield sufficient performance, whereas their combination improves translation quality. These results are promising, particularly for low-resource language pairs, since the proposed method requires less memory and computation while maintaining translation quality.
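As a minimal sketch of how LoRA and bottleneck adapters can be attached to a multilingual pre-trained model and trained jointly, the snippet below uses the AdapterHub `adapters` library. The checkpoint (mBART-50), adapter name, and hyperparameter values are illustrative assumptions for this sketch, not the exact configuration used in the study.

```python
# Sketch: combining LoRA with a bottleneck adapter via the AdapterHub "adapters" library.
# The checkpoint, adapter name, and hyperparameters are illustrative assumptions only.
from adapters import AutoAdapterModel, ConfigUnion, LoRAConfig, SeqBnConfig

# Multilingual pre-trained model (mBART-50 assumed here for illustration).
model = AutoAdapterModel.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

# Combine low-rank LoRA updates with a sequential (Pfeiffer-style) bottleneck adapter.
combined_config = ConfigUnion(
    LoRAConfig(r=8, alpha=16),          # low-rank updates on the attention projections
    SeqBnConfig(reduction_factor=16),   # bottleneck adapter inserted after each block
)
model.add_adapter("tr_en", config=combined_config)

# Freeze the backbone and train only the adapter modules.
model.train_adapter("tr_en")

# Report what fraction of the model is actually being fine-tuned.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} / {total:,} ({trainable / total:.1%})")
```

The printed fraction is what the parameter-efficiency claim refers to: only the small adapter and LoRA weights receive gradient updates, while the multilingual backbone stays frozen.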