In this work, we propose a Selective Fine-Tuning algorithm for Bayesian Networks (SFTBN). The aim is to improve the accuracy of Bayesian Network (BN) classifiers by finding better estimates of the probability terms the classifiers use. The algorithm augments a BN learning algorithm with a fine-tuning stage that estimates these probability terms more accurately. If the value of a probability term contributes to the misclassification of a training instance and falls outside its valid range, we update (fine-tune) that value; the size of the update is proportional to the distance between the value and its valid range. We use the algorithm to fine-tune several forms of BN: the Naive Bayes (NB), Tree Augmented Naive Bayes (TAN), and Bayesian Augmented Naive Bayes (BAN) models. Our empirical experiments indicate that the SFTBN algorithm improves the classification accuracy of BN classifiers. We also generalize the original fine-tuning algorithm for Naive Bayes (FTNB) to BN models. An empirical comparison of the two algorithms shows that while FTNB is more accurate than SFTBN for fine-tuning NB classifiers, SFTBN is more accurate than the adapted version of FTNB for fine-tuning general BNs.
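The core update rule can be illustrated with a minimal sketch. The function name, the learning-rate parameter `eta`, and the representation of the valid range as an interval `[lo, hi]` are all assumptions made for illustration; the paper's actual update formula may differ.

```python
def fine_tune_term(value, lo, hi, eta=0.5):
    """Hypothetical sketch of the selective fine-tuning step.

    `value` is a probability term that contributed to misclassifying a
    training instance; `[lo, hi]` is its assumed valid range. If the value
    already lies inside the range, it is left unchanged (the update is
    selective). Otherwise it is moved toward the range by an amount
    proportional to its distance from the range, scaled by `eta`.
    """
    if lo <= value <= hi:
        return value                      # inside valid range: no update
    if value < lo:
        return value + eta * (lo - value)  # move up, proportional to gap
    return value - eta * (value - hi)      # move down, proportional to gap
```

For example, a term of 0.2 with an assumed valid range of [0.4, 0.6] and `eta=0.5` would be nudged halfway toward the range, to 0.3, while a term of 0.5 would be left untouched.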