Background: Metal artifacts introduce challenges in image-guided diagnosis and accurate dose calculation. This study aims to reduce metal artifacts caused by the spinal brace by using virtually generated artifacts and convolutional neural networks, and to compare the performance of this approach with two other methods, namely, linear interpolation metal artifact reduction (LIMAR) and normalized metal artifact reduction (NMAR).

Method: A total of 3,600 CT image slices from 60 patients with vertebral metastases were selected. The spinal cord center was marked in each image, metal masks were added on both sides of the marker to generate artifact-insert CT images, and the CT values of the metal parts were copied to the original CT images to obtain reference CT images. These images were divided into training (3,000 slices) and test (600 slices) sets. Modified U-Net and pix2pix architectures were applied to learn the relationship between the reference and artifact-insert images. The mean absolute error (MAE), mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were calculated between the reference CT images and the CT images predicted by LIMAR, NMAR, U-Net, and pix2pix. The CT values of organs in the different images were compared. Radiotherapy treatment plans for vertebral metastases were designed, dose calculation was performed, and the dose distributions in the different types of images were compared.

Results: The MAE values between the reference images and the images generated by LIMAR, NMAR, U-Net, and pix2pix were 15.02, 16.16, 6.12, and 6.48 HU, respectively, and the corresponding PSNR values were 15.37, 152.70, 158.93, and 65.14 dB, respectively. Pix2pix restored more texture than U-Net in the visual comparison. The average CT values of the liver, spleen, and left and right kidneys in the artifact-insert images were all significantly higher than those in the reference images (p<0.05). The average CT values of the organs in the images processed by the four methods showed no significant differences from those in the reference images. The mean dose of the planning target volume in the artifact-insert images was significantly lower than that in the reference CT images. The average γ passing rate (1%, 1 mm) of the artifact-insert images was significantly lower than that of the reference images (95.9±1.4% vs. 99.2±1.4%, p<0.05).

Conclusions: According to the simulated artifact-insert images of the spinal brace, the U-Net and pix2pix deep learning networks can markedly reduce metal artifacts and improve the visualization of critical structures compared with LIMAR and NMAR. Pix2pix restores more texture with the help of a discriminator. Metal artifacts increase dose calculation uncertainty in radiotherapy. The doses calculated on images obtained by U-Net and pix2pix were identical to those calculated on the reference images.
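
The four per-slice image-quality metrics named in the Method can be reproduced with a short script. The following is a minimal sketch, not the authors' code: it assumes the reference and corrected slices are NumPy arrays in HU, uses scikit-image for PSNR and SSIM, and takes a data_range of 4000 HU purely for illustration.

# Minimal sketch (illustrative, not the study's implementation): MAE, MSE,
# PSNR, and SSIM between a reference CT slice and an artifact-corrected slice.
# Array shapes and the data_range value are assumptions for this example.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(reference_hu: np.ndarray, corrected_hu: np.ndarray,
                   data_range: float = 4000.0) -> dict:
    """Return the four image-quality metrics for one slice pair (values in HU)."""
    diff = corrected_hu.astype(np.float64) - reference_hu.astype(np.float64)
    mae = float(np.mean(np.abs(diff)))   # mean absolute error, HU
    mse = float(np.mean(diff ** 2))      # mean square error, HU^2
    psnr = peak_signal_noise_ratio(reference_hu, corrected_hu,
                                   data_range=data_range)  # dB
    ssim = structural_similarity(reference_hu, corrected_hu,
                                 data_range=data_range)
    return {"MAE": mae, "MSE": mse, "PSNR": psnr, "SSIM": ssim}

if __name__ == "__main__":
    # Synthetic 512x512 slices stand in for reference and predicted CT images.
    rng = np.random.default_rng(0)
    reference = rng.uniform(-1000.0, 3000.0, size=(512, 512))
    corrected = reference + rng.normal(0.0, 10.0, size=(512, 512))
    print(evaluate_slice(reference, corrected))

Because PSNR and SSIM depend on the chosen data_range, the reported dB values are only comparable when every method is evaluated with the same setting.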