This paper describes two variants of Kohonen's self-organizing feature map (SOFM) algorithm. Both variants update the weights only after presentation of a group of input vectors, whereas the original algorithm updates the weights after presentation of every input vector. The main advantage of these variants is that they expose a finer grain of parallelism, suitable for implementation on machines with a very large number of processors, without compromising the desired properties of the algorithm. It is proved that, for one-dimensional (1-D) maps and 1-D continuous input and weight spaces, the strictly increasing and strictly decreasing weight configurations form an absorbing class in both variants, exactly as in the original algorithm. Ordering of the maps and convergence to asymptotic values are also proved, again matching the theoretical results obtained for the original algorithm. Simulations of a real-world application, using two-dimensional (2-D) maps on 12-D speech data, are presented to support the theoretical results and show that the performance of one of the variants is in all respects almost as good as that of the original algorithm. Finally, the practical utility of the finer parallelism is confirmed by the description of a massively parallel hardware system that makes effective use of the better variant.
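
The abstract specifies the update rules only at a high level; the following is a minimal NumPy sketch, not the authors' implementation, contrasting the original per-sample rule with a group (batch) update of the kind the variants describe. The function names, the rectangular neighborhood function, and the constant learning rate are all illustrative assumptions.

```python
import numpy as np

def neighborhood(winner, n_units, radius):
    """Rectangular neighborhood: 1 for units within `radius` of the winner, else 0."""
    idx = np.arange(n_units)
    return (np.abs(idx - winner) <= radius).astype(float)

def sofm_online(weights, inputs, lr=0.1, radius=1):
    """Original Kohonen rule: the weights change after every input vector."""
    for x in inputs:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        h = neighborhood(winner, len(weights), radius)
        weights += lr * h[:, None] * (x - weights)
    return weights

def sofm_batch(weights, inputs, lr=0.1, radius=1):
    """Group (batch) variant: winners and corrections are computed for the whole
    group against the *old* weights, then applied at once. Because the corrections
    for different inputs are independent, they can be computed in parallel."""
    delta = np.zeros_like(weights)
    for x in inputs:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        h = neighborhood(winner, len(weights), radius)
        delta += lr * h[:, None] * (x - weights)
    return weights + delta

# Example with a 1-D map and 1-D inputs, the setting of the paper's proofs
# (hypothetical parameters, for illustration only):
rng = np.random.default_rng(0)
w = rng.random((8, 1))          # 8 map units, 1-D weight space
x = rng.random((100, 1))        # a group of 1-D input vectors
w_after = sofm_batch(w.copy(), x)
```

Decoupling winner search from weight modification in this way is what allows the per-input corrections to be distributed across many processors, which is the finer-grained parallelism the paper exploits.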