Abstract. Lowe [1] proposed that the kernel parameters of a radial basis function (RBF) neural network may first be fixed, after which the weights of the output layer can be determined by the pseudo-inverse. Jang, Sun, and Mizutani ([2], p. 342) pointed out that this type of two-step training method can also be used in fuzzy neural networks (FNNs). Through extensive computer simulations, we [3] demonstrated that an FNN with randomly fixed membership function parameters (FNN-RM) trains faster and generalizes better than the classical FNN. To provide a theoretical basis for the FNN-RM, in this paper we present an intuitive proof of the universal approximation ability of the FNN-RM, based on the orthogonal set theory proposed by Kaminski and Strumillo for RBF neural networks [4].
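To make the two-step scheme concrete, the following is a minimal sketch in Python/NumPy: Gaussian membership (kernel) functions with randomly fixed centers and widths form the hidden layer, and the output weights are then obtained in one shot via the Moore-Penrose pseudo-inverse. The network size, parameter ranges, and toy target function are illustrative assumptions, not values taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative): approximate f(x) = sin(2*pi*x) on [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 1))  # training inputs
T = np.sin(2 * np.pi * X)                 # training targets

# Step 1: fix the membership-function (kernel) parameters at random.
n_hidden = 30
centers = rng.uniform(0.0, 1.0, size=(n_hidden, 1))  # random centers
widths = rng.uniform(0.1, 0.5, size=n_hidden)        # random widths


def hidden_matrix(X):
    """Hidden-layer output matrix H: Gaussian membership of each
    input to each randomly placed kernel."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))


H = hidden_matrix(X)

# Step 2: determine the output weights by the pseudo-inverse,
# i.e. the least-squares solution of H @ W = T.
W = np.linalg.pinv(H) @ T

# Evaluate the trained network on a test grid.
X_test = np.linspace(0.0, 1.0, 101)[:, None]
Y_test = hidden_matrix(X_test) @ W
print("max abs error:", np.abs(Y_test - np.sin(2 * np.pi * X_test)).max())
```

Because the hidden-layer parameters are never adapted, the only "training" is the linear least-squares solve in Step 2, which is what makes this scheme fast relative to gradient-based tuning of all parameters.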