A systematic four-step batch approach is presented for second-order training of radial basis function (RBF) neural networks for estimation. First, it is shown that second-order training works best when applied separately to several disjoint parameter subsets. Newton's method is used to find distance-measure weights, leading to a form of embedded feature selection. Next, separate Newton algorithms are developed for the RBF spread parameters, center vectors, and output weights. The final algorithm's training errors per iteration and per multiply are compared to those of other algorithms, showing that its convergence speed is reasonable. For several widely available datasets, it is shown that the tenfold testing errors of the final algorithm are less than those of recursive least squares, the error-correction algorithm, support vector regression, and Levenberg-Marquardt.
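As an informal illustration of the four-step structure described above, and not the paper's implementation, the following Python sketch alternates damped Newton updates over the four disjoint parameter subsets of a Gaussian RBF network: distance-measure weights, spreads, centers, and output weights. The toy data, the finite-difference derivatives, and the damping constant are all illustrative assumptions; the paper derives analytic second-order expressions for each subset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N samples, n inputs, scalar target (illustrative only).
N, n, K = 200, 3, 8
X = rng.normal(size=(N, n))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def hidden(X, d, beta, C):
    """Gaussian RBF units with a weighted distance measure:
    phi_k(x) = exp(-beta_k * sum_i d_i * (x_i - c_ki)^2)."""
    diff2 = (X[:, None, :] - C[None, :, :]) ** 2   # (N, K, n)
    return np.exp(-(diff2 @ d) * beta)             # (N, K)

def grad_hess(f, p, eps=1e-3):
    """Central finite-difference gradient and Hessian of scalar f at p,
    a stand-in for the paper's analytic second-order expressions."""
    m = p.size
    g, H, f0 = np.zeros(m), np.zeros((m, m)), f(p)
    for i in range(m):
        ei = np.zeros(m); ei[i] = eps
        fp, fm = f(p + ei), f(p - ei)
        g[i] = (fp - fm) / (2 * eps)
        H[i, i] = (fp - 2 * f0 + fm) / eps ** 2
        for j in range(i):
            ej = np.zeros(m); ej[j] = eps
            H[i, j] = H[j, i] = (
                f(p + ei + ej) - f(p + ei - ej)
                - f(p - ei + ej) + f(p - ei - ej)
            ) / (4 * eps ** 2)
    return g, H

def newton_step(f, p, lam=1e-1):
    """One damped Newton update on a single parameter subset;
    the Levenberg-Marquardt-style term lam*I keeps the solve well-posed."""
    g, H = grad_hess(f, p)
    step = np.linalg.solve(H + lam * np.eye(p.size), g)
    return p - step if f(p - step) < f(p) else p  # accept only improving steps

# Parameter subsets, trained separately as the abstract describes.
d = np.ones(n)                                  # distance-measure weights
beta = np.ones(K)                               # spread parameters
C = X[rng.choice(N, K, replace=False)].copy()   # center vectors

def err(d_, b_, C_, w_):
    return np.mean((hidden(X, d_, b_, C_) @ w_ - y) ** 2)

# Initialize output weights by least squares so that the first Newton
# steps on d, beta, and C see a nonzero error surface.
Phi = hidden(X, d, beta, C)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(K), Phi.T @ y)

# Four-step batch iteration: each Newton update touches one disjoint
# parameter subset while the other three are held fixed.
for it in range(10):
    # Step 1: distance-measure weights. A weight d_i driven toward zero
    # effectively prunes input i (the embedded feature selection).
    d = newton_step(lambda p: err(p, beta, C, w), d)
    # Step 2: spread parameters.
    beta = newton_step(lambda p: err(d, p, C, w), beta)
    # Step 3: center vectors, flattened into one parameter vector.
    C = newton_step(lambda p: err(d, beta, p.reshape(K, n), w),
                    C.ravel()).reshape(K, n)
    # Step 4: output weights. The error is quadratic in w, so one
    # regularized least-squares solve is an exact Newton step.
    Phi = hidden(X, d, beta, C)
    w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(K), Phi.T @ y)
    print(f"iteration {it}: training MSE = {np.mean((Phi @ w - y) ** 2):.5f}")
```

The step ordering here follows the abstract; note that the output-weight update needs no damping or line search, since the squared error is exactly quadratic in those weights, while the other three subsets require the safeguarded Newton step shown above.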