Convolutional neural networks (CNNs) outperform traditional machine learning algorithms across a wide range of applications, such as object recognition, image segmentation, and autonomous driving. However, their ever-growing computational complexity makes it necessary to design efficient hardware accelerators. Most CNN accelerators focus on exploring various dataflow styles and designs that exploit computational parallelism; the potential performance gains from sparsity (in both activations and weights) have not been adequately addressed. The computation and memory footprint of CNNs can be significantly reduced if sparsity is exploited during network evaluation, and various pruning methods have been proposed to increase it. To take advantage of sparsity, some accelerator designs explore sparsity encoding and evaluation on CNN accelerators. However, sparsity encoding is typically applied to either activations or weights alone, and only during inference. Since activations and weights have also been shown to be highly sparse during training, sparsity-aware computation should be exploited in the training phase as well.

To further improve performance and energy efficiency, some accelerators evaluate CNNs with reduced precision. This, too, has been limited to inference, because naively reducing precision during training degrades network accuracy. In addition, CNN evaluation is usually memory-intensive, especially during training: the memory system cannot feed the computational units fast enough, leaving them idle and yielding low utilization. 3D memory interfaces have been adopted on high-end GPUs to alleviate this bandwidth shortage.

In this article, we propose SPRING, a SParsity-aware Reduced-precision Monolithic 3D CNN accelerator for trainING and inference. SPRING supports both CNN training and inference. It encodes sparsity in both activations and weights with a binary mask scheme, and uses stochastic rounding to train CNNs at reduced precision without accuracy loss. To alleviate the memory bottleneck in CNN evaluation, especially during training, SPRING employs an efficient monolithic 3D nonvolatile memory interface that increases memory bandwidth. Compared to the Nvidia GeForce GTX 1080 Ti, SPRING achieves a 15.6× speedup, 4.2× power reduction, and 66.0× higher energy efficiency for CNN training, and a 15.5× speedup, 4.5× power reduction, and 69.1× higher energy efficiency for inference.
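To make the binary mask scheme concrete, the sketch below shows one plausible reading of it in NumPy: nonzero values are stored densely alongside a 1-bit-per-element mask, so zeros are never stored or fetched. This is an illustrative software model only; the function names and packing layout are our assumptions, not the accelerator's actual hardware format.

```python
import numpy as np

def encode_binary_mask(tensor):
    """Compress a sparse tensor into (packed mask, nonzeros, shape).

    The mask holds one bit per element (1 = nonzero), so zero
    entries cost a single bit of storage instead of a full word.
    """
    mask = tensor != 0                    # boolean occupancy map
    nonzeros = tensor[mask]               # dense array of surviving values
    return np.packbits(mask.ravel()), nonzeros, tensor.shape

def decode_binary_mask(packed_mask, nonzeros, shape):
    """Reconstruct the dense tensor from its mask encoding."""
    n = int(np.prod(shape))
    mask = np.unpackbits(packed_mask)[:n].astype(bool)
    dense = np.zeros(n, dtype=nonzeros.dtype)
    dense[mask] = nonzeros                # scatter nonzeros back in place
    return dense.reshape(shape)

# Example: a ~70%-sparse activation map round-trips losslessly while
# storing only its nonzeros plus a 1-bit-per-element mask.
acts = np.random.randn(4, 4).astype(np.float32)
acts[np.random.rand(4, 4) < 0.7] = 0
packed, nz, shape = encode_binary_mask(acts)
assert np.array_equal(decode_binary_mask(packed, nz, shape), acts)
```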
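Similarly, the following is a generic NumPy sketch of stochastic rounding, the rounding rule that makes reduced-precision training viable: each value is rounded up with probability equal to its fractional distance to the next grid point, so rounding is unbiased in expectation. The bit-width, scale parameter, and function name here are illustrative assumptions, not SPRING's actual fixed-point format.

```python
import numpy as np

def stochastic_round(x, num_bits=8, scale=1.0):
    """Quantize x to a fixed-point grid using stochastic rounding.

    Because E[round(x)] == x, small gradient updates survive in
    expectation rather than being systematically rounded to zero,
    which is what round-to-nearest does to sub-step updates.
    """
    step = scale / (2 ** (num_bits - 1))   # quantization step size
    scaled = x / step
    floor = np.floor(scaled)
    prob_up = scaled - floor               # fractional part, in [0, 1)
    rounded = floor + (np.random.rand(*np.shape(x)) < prob_up)
    return rounded * step

# A gradient smaller than one quantization step still moves the
# weights in expectation instead of always rounding to zero.
g = np.full(100_000, 1e-3)
print(stochastic_round(g, num_bits=8, scale=1.0).mean())  # ~1e-3
```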