Convolutional Neural Networks (CNNs) have gained popularity in Internet-of-Healthcare (IoH) applications such as medical diagnostics. However, recent research shows that adversarial attacks introducing slight, imperceptible perturbations can undermine deep neural networks in healthcare, raising questions about the safety of deploying IoH devices in clinical settings. In this paper, we first review techniques for defending against such cyber-attacks. We then study the robustness of well-known CNN architectures belonging to the sequential, parallel, and residual families, namely LeNet5, MobileNetV1, VGG16, ResNet50, and InceptionV3, against Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks, in the context of chest radiograph (X-ray) classification for an IoH application. Finally, we propose to improve the security of these CNN architectures by comparing standard and adversarial training. The results show that, among these models, smaller models with lower computational complexity are more robust to adversarial threats than the larger models frequently used in IoH applications. We further show that adversarially trained networks can outperform their standard-trained counterparts. The experimental results indicate that the model performance breakpoint occurs at a perturbation budget of γ = 0.3, with a maximum tolerated accuracy loss of 2%.
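
For concreteness, the sketch below shows how the FGSM and PGD attacks evaluated in this work are commonly implemented. It is a minimal PyTorch illustration under stated assumptions, not the authors' exact code: the model, cross-entropy loss, step size alpha, iteration count, and [0, 1] pixel range are assumptions, and the budget parameter eps plays the role of the paper's γ.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # One-step FGSM: x_adv = x + eps * sign(grad_x L(model(x), y)).
    # eps corresponds to the perturbation budget (gamma in the paper).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # assumes pixel values in [0, 1]

def pgd_attack(model, x, y, eps, alpha=0.01, steps=10):
    # PGD: iterated gradient-sign steps, each followed by a projection
    # back onto the L_inf ball of radius eps around the clean input.
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection step
        x_adv = x_adv.clamp(0, 1)                  # keep valid pixel range
    return x_adv.detach()
```

Under the same assumptions, adversarial training amounts to replacing (or mixing) each clean training batch with pgd_attack(model, x, y, eps) examples before the usual optimization step, which is the standard-versus-adversarial training comparison described above.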