This paper presents modified parallel architectures for multidimensional (n-d) convolution. We show that, with careful design, a two-dimensional (2-d) convolution can be computed with six rather than nine lower-order 2-d convolutions, a computation saving of 33%. Moreover, this saving is obtained without affecting the speed of the computation. The proposed partitioning strategy results in a core of data-independent convolution computations and generalizes to n-d convolution. The resulting very large scale integration (VLSI) networks have a simple modular structure and a highly regular topology, and they use simple arithmetic devices.
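To make the notion of partitioning a 2-d convolution into lower-order 2-d convolutions concrete, the sketch below illustrates the general blocking idea only; it is not the paper's six-convolution scheme. Each operand is split into 2x2 quadrants, every quadrant pair is convolved as a smaller, data-independent 2-d convolution, and the partial results are added back at the appropriate offsets. The function name blocked_conv2d and the use of scipy.signal.convolve2d are our assumptions for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def blocked_conv2d(x, h):
    """Full 2-d convolution assembled from quadrant-pair sub-convolutions.

    Assumes both operands have even dimensions so they split cleanly in half.
    """
    n0, n1 = x.shape[0] // 2, x.shape[1] // 2
    m0, m1 = h.shape[0] // 2, h.shape[1] // 2
    x_blocks = {(r, c): x[r*n0:(r+1)*n0, c*n1:(c+1)*n1] for r in (0, 1) for c in (0, 1)}
    h_blocks = {(r, c): h[r*m0:(r+1)*m0, c*m1:(c+1)*m1] for r in (0, 1) for c in (0, 1)}
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    # Each block pair contributes one lower-order 2-d convolution; by the
    # bilinearity and shift property of convolution, placing it at the offset
    # given by the blocks' positions reproduces the full result.
    for (rx, cx), xb in x_blocks.items():
        for (rh, ch), hb in h_blocks.items():
            sub = convolve2d(xb, hb)                      # lower-order 2-d convolution
            r0, c0 = rx*n0 + rh*m0, cx*n1 + ch*m1         # placement offset
            out[r0:r0+sub.shape[0], c0:c0+sub.shape[1]] += sub
    return out

# Quick check against a direct full-size convolution.
x = np.random.rand(8, 8)
h = np.random.rand(4, 4)
assert np.allclose(blocked_conv2d(x, h), convolve2d(x, h))
```

Note that this naive quadrant split uses one sub-convolution per block pair; the point of the paper's partitioning strategy is to combine such data-independent sub-convolutions so that fewer of them are needed.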