Recently an algorithm was discovered which separates N points in n dimensions by planes in such a manner that no two points are left unseparated by at least one plane [1][2][3]. Using this new algorithm, we show that there are two ways of classification by a neural network for a large-dimensional feature space, both of which are non-iterative and deterministic. To demonstrate the power of both these methods, we apply them exhaustively to a classical pattern recognition problem, the Fisher-Anderson IRIS flower data set, and present the results. It is expected that these methods will now be widely used for the training of neural networks for Deep Learning, not only because of their non-iterative and deterministic nature but also because of their efficiency and speed, and that they will supersede other classification methods which are iterative in nature and rely on error minimization.

Additional Key Words and Phrases: non-iterative training, neural networks, pattern recognition, algorithms

ACM Reference Format: K. Eswaran and K. Damodhar Rao. 2015. On non-iterative training of a neural classifier.

^1 The number of planes, q, required to separate the points in this manner is such that q = O(log_2(N)); this empirical estimate of q improves as the number of dimensions, n, gets larger. (For example, for a particular set of random points generated by the authors with N = 2,000 and n = 15, only 22 planes were needed (q = 22), whereas for another case with N = 50,000 and n = 25 it was found that only 5 more planes, i.e. 27 planes (q = 27), separated all the points; this increase is as per the empirical formula, since log_2(50,000/2,000) = log_2(25) ≈ 4.64 ≈ 5 planes.) See Ref. [12] for details.
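To make the pairwise-separation requirement concrete, the following is a minimal sketch (not the authors' algorithm from Refs. [1]-[3]) that keeps adding hyperplanes until every pair of N random points in n dimensions lies on opposite sides of at least one plane. It uses randomly drawn planes purely for illustration, so the count it reports will generally be larger than the tailored construction behind the q = O(log_2(N)) estimate; the function name and parameters are hypothetical.

```python
import numpy as np

def count_separating_planes(N=2000, n=15, seed=0, max_planes=500):
    """Count how many hyperplanes are needed so that every pair of points
    is separated (falls on opposite sides) by at least one plane.

    NOTE: this is a random-plane illustration of the separation criterion,
    not the deterministic construction of Refs. [1]-[3]; it only serves to
    show the q = O(log_2(N)) behaviour qualitatively.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(N, n))      # N random points in n dimensions

    # signatures[i] holds the side (+1/-1) of point i with respect to each
    # plane added so far; two points are separated once their rows differ.
    signatures = np.zeros((N, 0), dtype=np.int8)

    for q in range(1, max_planes + 1):
        w = rng.normal(size=n)                   # random plane normal
        b = rng.uniform(-1.0, 1.0)               # random plane offset
        side = np.sign(X @ w + b).astype(np.int8)
        signatures = np.hstack([signatures, side[:, None]])

        # All pairs are separated iff all side-patterns (rows) are distinct.
        if len({tuple(row) for row in signatures}) == N:
            return q
    return None  # not fully separated within max_planes

if __name__ == "__main__":
    print("planes needed:", count_separating_planes(N=2000, n=15))
```

The termination test relies on the fact that two points are separated by at least one plane exactly when their sign vectors differ in at least one coordinate, so full pairwise separation is equivalent to all N signature rows being distinct.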