Bayesian networks are probabilistic graphical models that have proven effective at handling uncertainty in many real-world applications. A key issue in learning Bayesian networks is parameter estimation, i.e., learning the local conditional distributions of each variable in the model. While parameter estimation can be performed efficiently when complete training data are available (i.e., when all variables have been observed), learning the local distributions becomes difficult when latent (hidden) variables are introduced. Expectation Maximization (EM) is commonly used for parameter estimation in the presence of latent variables, but EM is a local optimization method that often converges to sub-optimal estimates. Although several authors have improved upon traditional EM, few have applied population-based search techniques to parameter estimation, and most existing population-based approaches fail to exploit the conditional independence properties of the networks. We introduce two new methods for parameter estimation in Bayesian networks based on particle swarm optimization (PSO). The first is a single-swarm PSO; the second is a multi-swarm PSO algorithm in which a swarm is assigned to the Markov blanket of each variable to be estimated and overlapping swarms compete with one another. Experiments on data generated from a variety of Bayesian networks indicate that the multi-swarm algorithm outperforms several existing approaches.
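To make the single-swarm idea concrete, the following is a minimal sketch (not the paper's implementation) of PSO maximizing the marginal log-likelihood of a tiny latent-variable network H → X, where H is hidden and X is observed. The network, the three parameters (P(H=1), P(X=1|H=0), P(X=1|H=1)), and all hyperparameter values are illustrative assumptions, not details from the paper.

```python
# Illustrative single-swarm PSO for Bayesian network parameter estimation.
# Network (assumed for this sketch): hidden H -> observed X, both binary.
# theta = [P(H=1), P(X=1|H=0), P(X=1|H=1)]; fitness = marginal log-likelihood.
import math
import random

random.seed(0)

def log_likelihood(theta, data):
    """Log-likelihood of observed X, marginalizing out the hidden H."""
    p_h, p_x_h0, p_x_h1 = theta
    p_x1 = (1 - p_h) * p_x_h0 + p_h * p_x_h1  # P(X=1) = sum_h P(h) P(X=1|h)
    ll = 0.0
    for x in data:
        p = p_x1 if x == 1 else 1 - p_x1
        ll += math.log(max(p, 1e-12))  # guard against log(0)
    return ll

def pso(data, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over the 3-dimensional parameter vector."""
    dim = 3
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [log_likelihood(p, data) for p in pos]
    g = pbest_f.index(max(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                # Clamp each coordinate so it remains a valid probability.
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            f = log_likelihood(pos[i], data)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

With only X observed, the individual parameters are not identifiable, but the implied marginal P(X=1) is; running the sketch on data with 70% ones recovers a marginal near 0.7. The multi-swarm variant described above would instead run one such swarm per Markov blanket and resolve overlaps by competition.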