Because cancer patients require frequent follow-up imaging, some institutions omit noncontrast CT to reduce overall patient radiation exposure. However, although omitting it lowers the dose, noncontrast CT can provide additional important information. In this article, we propose a deep subtraction residual network based on adjacency content transfer that reconstructs noncontrast CT from contrast-enhanced CT while maintaining image quality comparable to that of a CT scan originally acquired without contrast. To address the slight structural dissimilarity between the paired CT images (noncontrast CT and contrast-enhanced CT) caused by involuntary physiological motion, we introduce a contrastive loss network derived from the adjacency content-transfer strategy. We assess the reconstruction performance of the algorithm with standard similarity metrics (MSE, SSIM, NRMSE, PSNR, MAE) and with the fitted Hounsfield unit (HU) distribution of the output mapping. To build the model, we randomly select 15,405 paired CT images (noncontrast CT and contrast-enhanced CT) for training and 10,270 paired CT images for testing. The proposed algorithm preserves the robust structures of the contrast-enhanced CT scans and learns the noncontrast attenuation pattern from the noncontrast CT scans. In the evaluation, the deep subtraction residual network achieves MSE, MAE, NRMSE, and PSNR scores that are 30% better (lower errors, higher PSNR) than those of the baseline models (BEGAN, CycleGAN, Pixel2Pixel) and more closely simulates the HU curve of noncontrast CT attenuation. Based on an analysis of the experimental results, we conclude that the noncontrast CT images reconstructed by our proposed algorithm not only preserve the high-quality structures of the contrast-enhanced CT images but also mimic the CT attenuation of the originally acquired noncontrast CT images.
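The similarity metrics listed above can be computed as sketched below. This is an illustrative helper, not the authors' evaluation code; the use of scikit-image's metric functions, the function name `similarity_metrics`, and the `data_range` handling are assumptions for demonstration.

```python
import numpy as np
from skimage.metrics import (
    mean_squared_error,
    normalized_root_mse,
    peak_signal_noise_ratio,
    structural_similarity,
)

def similarity_metrics(reference, reconstructed, data_range=None):
    """Compare a reconstructed noncontrast CT slice against a reference slice.

    reference/reconstructed: 2-D arrays of pixel intensities (e.g. HU values).
    data_range: intensity span used by PSNR and SSIM; if None, it is
    estimated from the reference image (an illustrative default).
    """
    ref = np.asarray(reference, dtype=np.float64)
    rec = np.asarray(reconstructed, dtype=np.float64)
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    return {
        "MSE": mean_squared_error(ref, rec),
        "MAE": float(np.mean(np.abs(ref - rec))),  # mean absolute error
        "NRMSE": normalized_root_mse(ref, rec),
        "PSNR": peak_signal_noise_ratio(ref, rec, data_range=data_range),
        "SSIM": structural_similarity(ref, rec, data_range=data_range),
    }
```

Note that MSE, MAE, and NRMSE are error measures (lower is better), while PSNR and SSIM are fidelity measures (higher is better), which is why "better scores" in the text means lower values for the first three and higher values for the last two.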