This paper continues the development of a heuristic initialization methodology for designing multilayer feedforward neural networks aimed at modeling nonlinear functions in engineering mechanics applications, as presented previously at SPIE in 2003 and from 2005 to 2007. Seeking a transparent, domain-knowledge-based approach to neural network initialization and result interpretation, the authors examine the efficiency of linear sums of sigmoidal functions and offer constructive methods for approximating functions in engineering mechanics applications. This study provides details and results of mapping the four arithmetic operations (addition, subtraction, multiplication, and division), as well as other functions including the reciprocal, Gaussian, and Mexican hat functions, into multilayer feedforward neural networks with one hidden layer. Approximation and training examples demonstrate the efficiency and accuracy of the proposed mapping techniques. Future work is also identified. This effort directly contributes to the further extension of the proposed initialization procedure by opening the door to the approximation of a wider range of nonlinear functions.
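As a minimal illustration of the constructive idea behind such mappings, the sketch below builds a one-hidden-layer feedforward network whose sigmoid hidden units are set by hand (rather than trained) to reproduce a Gaussian-like bump as a linear sum of two shifted sigmoids. The specific slope, offset, and output weights (`slope`, `offset`, `bump_net`) are illustrative assumptions and not the paper's exact mapping.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer network: y(x) = sum_k v_k * sigmoid(w_k * x + b_k)
# Heuristically chosen weights form a smooth bump from the difference of
# two shifted sigmoids; the values below are assumed for illustration only.
slope = 2.0   # steepness of each hidden sigmoid (assumed)
offset = 1.2  # half-width of the bump (assumed)

def bump_net(x):
    # Hidden layer: two sigmoid units; output layer: weights v = [+1, -1]
    h1 = sigmoid(slope * (x + offset))
    h2 = sigmoid(slope * (x - offset))
    return h1 - h2

x = np.linspace(-5.0, 5.0, 201)
target = np.exp(-0.5 * x**2)            # Gaussian to be approximated
approx = bump_net(x)
approx *= target.max() / approx.max()   # rescale output weights to match the peak

print("max abs error:", np.max(np.abs(approx - target)))
```

Such hand-set weights can then serve as an initialization for subsequent training, which is the spirit of the heuristic initialization procedure described above.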