On-line learning of both binary and continuous rules in an Ising space is studied. Learning is achieved by using an artificial parameter, a weight vector J, which is constrained to the surface of a hypersphere (spherical constraint). In the case of a binary rule the generalization error decays to zero super-exponentially as exp(−Cα^2), where α is the number of examples divided by N, the size of the input vector, and C > 0. Much faster learning is obtained in the case of continuous activation functions, where the generalization error decays as exp(−e^{|λ|α}). The number of steps required for perfect learning is estimated for both scenarios and compared with simulations.
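The two decay laws quoted above can be compared numerically. The sketch below is purely illustrative: the constants C and λ are assumed placeholder values (the abstract only states C > 0), not values derived in the paper.

```python
import math

def gen_error_binary(alpha, C=1.0):
    """Super-exponential decay for a binary rule: eps_g ~ exp(-C * alpha^2).
    C = 1.0 is an assumed illustrative constant; the text only requires C > 0."""
    return math.exp(-C * alpha ** 2)

def gen_error_continuous(alpha, lam=1.0):
    """Doubly-exponential decay for continuous activation functions:
    eps_g ~ exp(-e^{|lam| * alpha}). lam = 1.0 is an assumed constant."""
    return math.exp(-math.exp(abs(lam) * alpha))

# The continuous case decays much faster as alpha grows:
for a in (1.0, 2.0, 3.0):
    print(f"alpha={a}: binary={gen_error_binary(a):.3e}, "
          f"continuous={gen_error_continuous(a):.3e}")
```

With these placeholder constants, the doubly-exponential law already dominates the super-exponential one at moderate α, consistent with the "much faster learning" claim.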