This paper proposes a low-cost approximate dynamic-range multiplier and describes its use in training convolutional neural networks (CNNs). Approximate multipliers can replace exact multiplications in the convolutions of a CNN's forward path. However, when a pre-trained model is deployed through post-training quantization, the erroneous convolution outputs of highly approximate multipliers significantly degrade inference accuracy. In contrast, approximation-aware training, in which the CNN model itself incorporates the approximate multiplier, optimizes the learnable parameters for the approximate hardware and produces better classification results. We analyze and characterize the error distribution of the approximate dynamic-range multiplication to find the most suitable approximate multiplier design. Because the biased convolution outputs are subsequently normalized, a low standard deviation of the relative errors in the multiplication outputs leads to a negligible accuracy drop. Based on these observations, the hardware cost of the proposed multiplier is further reduced by adopting inaccurate compression of the partial products, truncation of the input fraction, and a reduced-width multiplication output. When the proposed approximate multiplier is applied to residual convolutional neural networks on the CIFAR-100 and Tiny-ImageNet datasets, the accuracy drop of the approximation-aware training results relative to 32-bit floating-point CNNs is negligible.

INDEX TERMS Approximate computing, approximate multiplier, approximation-aware training, convolutional neural network, probabilistic multiplier.
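To make the dynamic-range idea concrete, the following is a minimal Python sketch of a DRUM-style dynamic-range approximate multiplier and of measuring the relative-error statistics the abstract refers to. The function name approx_mul, the window width k, and the LSB-debiasing trick are illustrative assumptions in the spirit of prior dynamic-range designs, not the paper's exact circuit.

```python
import random

def approx_mul(a: int, b: int, k: int = 6) -> int:
    """DRUM-style approximate multiply (illustrative sketch, not the paper's design).

    Each operand keeps only its k most significant bits, starting at the
    leading one; the truncated values are multiplied exactly and the result
    is shifted back. Forcing the kept window's LSB to 1 keeps the error of
    the dropped bits roughly unbiased.
    """
    def truncate(x: int):
        if x < (1 << k):
            return x, 0               # small operands pass through exactly
        msb = x.bit_length() - 1      # position of the leading one
        shift = msb - k + 1           # number of low bits dropped
        return (x >> shift) | 1, shift  # keep k bits; set LSB to debias

    ta, sa = truncate(a)
    tb, sb = truncate(b)
    return (ta * tb) << (sa + sb)

if __name__ == "__main__":
    # Estimate the mean and standard deviation of the relative error,
    # the statistic the abstract links to the accuracy drop.
    errs = []
    for _ in range(10_000):
        a = random.randint(1, 1 << 16)
        b = random.randint(1, 1 << 16)
        exact = a * b
        errs.append((approx_mul(a, b) - exact) / exact)
    mean = sum(errs) / len(errs)
    std = (sum((e - mean) ** 2 for e in errs) / len(errs)) ** 0.5
    print(f"mean relative error {mean:+.4%}, std {std:.4%}")
```

Under this sketch, shrinking k lowers hardware cost but widens the relative-error distribution; the abstract's argument is that, because normalization absorbs a consistent bias, it is the standard deviation of this relative error, rather than its mean, that governs the accuracy drop after approximation-aware training.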