The Artificial Neural Network (ANN), developed in recent decades, is a powerful tool for non-linear multivariable modeling, and its use has proven cost-effective. Choosing a suitable training algorithm is critical. The Backpropagation (BP) algorithm is the most common choice; while BP algorithms are effective and robust for training many types of network structures, they suffer from drawbacks such as easy entrapment in local minima and very slow convergence. In this paper, to improve ANN performance, Particle Swarm Optimization (PSO) is proposed as the mechanism for adjusting network weights, and the results are compared with those of BP variants such as the Levenberg-Marquardt and gradient-descent algorithms. Each network was run and trained with different learning rates, activation functions, and numbers of neurons in the hidden layer. Among various criteria, Mean Square Error (MSE) and accuracy were the main ones selected for evaluating both models; MSE was also used to determine the optimum number of hidden-layer neurons. The results show that the PSO approach outperforms BP for training neural network models.
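As a rough illustration of the idea, the sketch below trains a small one-hidden-layer network by encoding its weights as PSO particle positions and minimizing training MSE. The toy dataset, network size, and PSO hyperparameters are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data: learn y = sin(x).
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
y = np.sin(X)

N_HIDDEN = 8                      # hidden-layer size (in the paper, chosen via MSE)
DIM = 3 * N_HIDDEN + 1            # total trainable weights in a 1-8-1 network

def unpack(w):
    """Split a flat particle vector into the network's weight matrices."""
    i = 0
    W1 = w[i:i + N_HIDDEN].reshape(1, N_HIDDEN); i += N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN].reshape(N_HIDDEN, 1); i += N_HIDDEN
    b2 = w[i:i + 1]
    return W1, b1, W2, b2

def mse(w):
    """Forward pass (tanh hidden activation); return training MSE."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Standard PSO with an inertia weight; hyperparameters are assumptions.
N_PARTICLES, ITERS = 30, 200
W_INERTIA, C1, C2 = 0.7, 1.5, 1.5

pos = rng.normal(0.0, 1.0, (N_PARTICLES, DIM))   # particle positions = weight vectors
vel = np.zeros((N_PARTICLES, DIM))
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()
gbest_err = pbest_err.min()
init_err = gbest_err

for _ in range(ITERS):
    r1 = rng.random((N_PARTICLES, DIM))
    r2 = rng.random((N_PARTICLES, DIM))
    # Velocity update: inertia + cognitive pull to pbest + social pull to gbest.
    vel = (W_INERTIA * vel
           + C1 * r1 * (pbest - pos)
           + C2 * r2 * (gbest - pos))
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved] = pos[improved]
    pbest_err[improved] = err[improved]
    if pbest_err.min() < gbest_err:
        gbest_err = pbest_err.min()
        gbest = pbest[pbest_err.argmin()].copy()

print(f"initial MSE {init_err:.4f} -> final MSE {gbest_err:.4f}")
```

Because PSO only evaluates the loss and never differentiates it, the same loop works for any activation function or error criterion, which is one reason it avoids the gradient-specific pitfalls of BP noted above.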