“…One example of such a radial nonlinearity is \( f(I_1,\dots,I_N) = \frac{1}{1 + \sum_k I_k^2} \) (3), where the \(I_k\) are the net inputs to the radial slab. Some applications, such as winner-take-all networks and certain kinds of pattern classification, intrinsically require a nonlinear rotation of the hidden-layer neuron inputs (sometimes governed by attractor dynamics along the coordinate axes), yielding a hard decision-making capability. However, it appears that many other nonlinearly separable problems in logic, pattern recognition, and neural control can be adequately, and sometimes even advantageously, handled with purely radial nonlinearities.31 The idea of radial nonlinear neural slabs is compared with conventional layers of point nonlinear neurons in Figure 1.…”
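The contrast between a radial slab and a conventional point-nonlinear layer can be sketched in code. The exact functional form below is an assumption (the Lorentzian-like \( 1/(1+\sum_k I_k^2) \) reconstructed from eq. (3)); the function names are illustrative, not from the source. The key property is that the radial slab scales every unit by a common factor that depends only on the radius of the input vector, so the direction of the hidden-layer activity vector is preserved (no rotation), whereas independent point nonlinearities act on each component separately.

```python
import math

def radial_slab(inputs):
    """Radial nonlinearity: one common gain for the whole slab.

    The gain g = 1 / (1 + sum_k I_k^2) depends only on the radius
    of the input vector (assumed form of eq. (3)), so the output
    vector stays parallel to the input vector.
    """
    r2 = sum(i * i for i in inputs)
    g = 1.0 / (1.0 + r2)
    return [g * i for i in inputs]

def point_layer(inputs):
    """Conventional layer: an independent sigmoid at each neuron,
    which in general rotates the activity vector."""
    return [1.0 / (1.0 + math.exp(-i)) for i in inputs]
```

Because the radial slab applies one scalar gain, the ratio between any two unit outputs equals the ratio of their inputs, which is the sense in which it avoids a "nonlinear rotation" of the hidden-layer inputs.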