In recent years, deep learning (DL) models have become integral to numerous sectors, reshaping daily life and workflows. In healthcare in particular, DL has brought about a paradigm shift in medical diagnosis through innovative image analysis capabilities. These computational tools offer exceptional precision and speed, significantly enhancing diagnostic accuracy and facilitating early disease detection. However, widespread adoption of and reliance on these models has opened the door to new forms of vulnerability, notably adversarial attacks. In the context of medical image diagnosis, adversarial attacks pose an alarming threat: they can manipulate diagnostic models into misinterpreting imaging data, leading to false positives or false negatives. Such errors can result in misdiagnosis, delayed treatment, or unnecessary interventions, harming patient safety and the overall quality of healthcare. This project examines the landscape of adversarial attacks on medical image diagnosis. It studies popular adversarial attack strategies by subjecting a Convolutional Neural Network (CNN), an EfficientNet B0 model trained to categorise Alzheimer's brain MRI images, to the Vertical Perturbation attack, the Fast Gradient Sign Method (FGSM), and the Square attack. It then turns to one of the most common adversarial defence approaches, adversarial training. The model retrained on adversarial examples is evaluated against the same attacks, and recommendations for improving the neural network's robustness are provided based on the experimental findings.
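To make the attack and defence concrete, the following is a minimal PyTorch sketch of FGSM and of one FGSM-based adversarial training step. It assumes the classifier (for example, the EfficientNet B0 model) is a standard `torch.nn.Module` with inputs scaled to [0, 1]; the perturbation budget `epsilon` and the equal weighting of clean and adversarial losses are illustrative assumptions, not the settings used in this project.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    model.zero_grad()
    loss.backward()
    # Perturb each pixel in the direction that increases the classification loss.
    adv_images = images + epsilon * images.grad.sign()
    # Keep pixel values in the valid input range.
    return adv_images.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step mixing clean and FGSM-perturbed batches (illustrative)."""
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `fgsm_attack` implements the standard one-step FGSM perturbation, while `adversarial_training_step` shows the basic idea of adversarial training: augmenting each batch with adversarial examples so the model learns to classify them correctly.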