Attacks against deep learning (DL) models are considered a significant security threat. Although DL, and especially deep convolutional neural networks (CNNs), has shown extraordinary success in a wide range of medical applications, recent studies have proved that these models are vulnerable to adversarial attacks. Adversarial attacks are techniques that add small, crafted perturbations to input images; the perturbed images are practically indistinguishable from the originals yet are misclassified by the network. To address this threat, this paper introduces a novel defense technique against white-box adversarial attacks for DL-based cancer diagnosis, called the Robust Defense Model against Adversarial Attacks (RDMAA), which fine-tunes the CNN classifier using the weights of a pre-trained deep convolutional autoencoder (DCAE). Before adversarial examples are fed to the classifier, the RDMAA model is trained to reconstruct the perturbed input samples. The weights of the trained RDMAA are then used to fine-tune the CNN-based cancer diagnosis models. The fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks are applied against three DL-based cancer diagnosis modalities (lung nodule X-ray, leukemia microscopy, and brain tumor magnetic resonance imaging (MRI)) for both binary and multiclass labels. The experimental results show that, under attack, accuracy decreased to 35% and 40% for X-ray, 36% and 66% for microscopy, and 70% and 77% for MRI. In contrast, RDMAA exhibited substantial improvement, restoring accuracy to a maximum of 88% and 83% for X-ray, 89% and 87% for microscopy, and 93% for brain MRI. RDMAA is also compared with another common defense technique, adversarial training, and outperforms it. The results show that DL-based cancer diagnosis models are extremely vulnerable to adversarial attacks: even imperceptible perturbations are enough to fool them. The proposed RDMAA model provides a solid foundation for developing more robust and accurate medical DL models.
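To make the described pipeline concrete, the following is a minimal PyTorch sketch of its two stages: (1) training a DCAE to reconstruct clean images from FGSM-perturbed inputs, and (2) transferring the DCAE encoder weights into a CNN classifier that is then fine-tuned on the diagnosis task. The architecture, layer sizes, hyperparameters, and helper names (fgsm_perturb, train_dcae, RDMAAClassifier, fine_tune) are illustrative assumptions, not the paper's actual implementation; the attacked_model used to craft the FGSM examples is assumed to be a pre-trained baseline classifier.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """FGSM: add epsilon * sign of the input gradient of the loss (images assumed in [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

class DCAE(nn.Module):
    """Deep convolutional autoencoder; the layer sizes here are illustrative."""
    def __init__(self, channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dcae(dcae, attacked_model, loader, epochs=10, epsilon=0.03):
    """Stage 1: train the DCAE to reconstruct the clean image from its perturbed version."""
    opt = torch.optim.Adam(dcae.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_perturb(attacked_model, x, y, epsilon)
            loss = F.mse_loss(dcae(x_adv), x)  # reconstruction target is the clean image
            opt.zero_grad(); loss.backward(); opt.step()

class RDMAAClassifier(nn.Module):
    """Stage 2: CNN classifier whose feature extractor is initialized from the DCAE encoder."""
    def __init__(self, dcae, num_classes):
        super().__init__()
        self.features = dcae.encoder  # transfer the pre-trained DCAE weights
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

def fine_tune(classifier, loader, epochs=10):
    """Fine-tune the full classifier (transferred encoder plus new head) on the diagnosis labels."""
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(classifier(x), y)
            opt.zero_grad(); loss.backward(); opt.step()

In this sketch the entire network is updated during fine-tuning; whether the transferred encoder layers are frozen or further trained is a design choice not specified in the text above.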