The Back Propagation (BP) algorithm is one of the oldest learning techniques used by Artificial Neural Networks (ANN) and has been successfully applied to a wide range of practical problems. However, the algorithm still suffers from drawbacks such as easily becoming trapped in local minima and requiring a long time to converge to an acceptable solution. Recently, the introduction of Second Order methods has significantly improved learning in BP, but these methods still have drawbacks of their own, such as slow convergence and high computational complexity. To overcome these limitations, this research proposes a modified approach to BP that combines two Second Order methods, Conjugate Gradient and Quasi-Newton, with a 'gain' parameter. The performance of the proposed approach is evaluated in terms of lowest number of epochs, lowest CPU time, and highest accuracy on five benchmark classification datasets: Glass, Horse, 7-Bit Parity, Indian Liver Patient, and Lung Cancer. The results show that the proposed Second Order methods with 'gain' outperform the standard BP algorithm.
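To make the combination concrete, the following is a minimal sketch of what pairing a conjugate-gradient weight update with an adaptive 'gain' parameter can look like. Everything here is an illustrative assumption rather than the authors' exact formulation: the toy dataset, the single sigmoid unit, the learning rates, and the role of the gain (taken here to be the slope of the sigmoid activation, as in adaptive-gain BP variants) are all placeholders.

```python
# A minimal sketch (not the authors' exact implementation): back-propagation
# for a single sigmoid unit whose activation slope is an adaptive 'gain',
# with weight updates along Fletcher-Reeves conjugate-gradient directions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-input binary classification data, for illustration only.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(size=2) * 0.1   # weights
b = 0.0                        # bias
gain = 1.0                     # adaptive slope of the sigmoid
lr_w, lr_gain = 0.5, 0.05      # illustrative step sizes

def forward(X, w, b, gain):
    net = X @ w + b
    return net, 1.0 / (1.0 + np.exp(-gain * net))  # sigmoid with gain

d_prev, g_prev = None, None
for epoch in range(100):
    net, out = forward(X, w, b, gain)
    err = out - y
    # For squared error and a gained sigmoid: dE/dnet = err * gain * out*(1-out)
    delta = err * gain * out * (1.0 - out)
    g_w = X.T @ delta / len(X)      # gradient w.r.t. weights
    g_b = delta.mean()              # gradient w.r.t. bias
    # Gradient w.r.t. the gain itself: dE/dgain = err * net * out*(1-out)
    g_gain = np.mean(err * net * out * (1.0 - out))

    g = np.concatenate([g_w, [g_b]])
    if d_prev is None:
        d = -g                                  # first step: steepest descent
    else:
        beta = (g @ g) / (g_prev @ g_prev)      # Fletcher-Reeves coefficient
        d = -g + beta * d_prev                  # conjugate direction
    w += lr_w * d[:2]
    b += lr_w * d[2]
    gain -= lr_gain * g_gain        # adapt the gain by plain gradient descent
    d_prev, g_prev = d, g

_, out = forward(X, w, b, gain)
acc = np.mean((out > 0.5) == y)
print(f"final gain={gain:.3f}, training accuracy={acc:.2%}")
```

In this sketch the gain enters both the forward pass and every gradient, so adapting it reshapes the error surface the conjugate-gradient search moves over; a Quasi-Newton variant would replace the direction update with an approximate inverse-Hessian step while leaving the gain adaptation unchanged.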