This study introduces a distributive convolution framework for feature mapping, based on the distributive property of matrix multiplication, that aims to improve the computing speed of convolutions on two-dimensional (2D) planar and three-dimensional (3D) volumetric images. The method distributes 2D and 3D kernels (filters) into one-dimensional (1D) and 2D components, applies the convolutions concurrently along each spatial direction, and then fuses (adds) the results. This differs from spatially separable convolution, which is based on the associative property and applies the separated filters in sequence. Gaussian and Laplacian filters implemented with the distributive convolution are evaluated on public images for denoising and edge detection. The results show that the method achieves effects similar to those of traditional convolution and even improves edge detection performance in some cases. Fusion by direct linear addition performs similarly to fusion by root mean square. In comparison, spatially separable convolution yields a similar effect for Gaussian blur but inferior performance for edge detection.
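As a minimal sketch of the distributive idea described above (not the authors' implementation), the example below decomposes the 3x3 Laplacian into a 1D second-difference kernel applied along each image axis, runs the two directional convolutions independently, and fuses the partial responses by addition; the kernel names and the use of scipy.ndimage are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

# 1D second-difference kernel; the 2D Laplacian equals this kernel
# applied along the rows plus the same kernel applied along the columns.
k1d = np.array([1.0, -2.0, 1.0])

# Reference dense 2D Laplacian kernel for comparison.
lap2d = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Distributive route: convolve along each axis independently
# (the two passes have no data dependency and could run concurrently),
# then fuse the directional responses by direct linear addition.
resp_rows = convolve1d(img, k1d, axis=0, mode='nearest')
resp_cols = convolve1d(img, k1d, axis=1, mode='nearest')
fused = resp_rows + resp_cols

# Traditional route: a single dense 2D convolution.
direct = convolve(img, lap2d, mode='nearest')

print(np.allclose(fused, direct))  # True up to floating-point error
```

Because convolution distributes over kernel addition, the fused result matches the direct 2D convolution for kernels that can be written as a sum of lower-dimensional components; this is the sketch's analogue of the paper's distributive feature mapping, as opposed to separable convolution, which would chain the 1D passes in sequence.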