Objectives: Machine learning (ML) is a powerful tool for supporting physicians in decision-making, providing timely answers. However, ML for health systems is susceptible to security attacks and privacy violations. This paper surveys studies on security and privacy in ML for health.
Methods: We examine attacks, defenses, and privacy-preserving strategies, and discuss their challenges. Our research protocol was as follows: we performed a manual search, defined the search string, removed duplicate papers, filtered papers first by title and abstract and then by full text, and analyzed the contributions of the remaining papers, including their strategies and challenges. In total, we collected and discussed 40 papers on attacks, defenses, and privacy.
Results: Our findings identify the most employed strategies in each domain. Trends in attacks include universal adversarial perturbations (UAPs), generative adversarial network (GAN)-based attacks, and DeepFakes used to generate malicious examples. Trends in defense are adversarial training, GAN-based strategies, and out-of-distribution (OOD) detection to identify and mitigate adversarial examples (AEs). Privacy-preserving strategies include federated learning (FL), differential privacy, and combinations of strategies that enhance FL. Open challenges include developing attacks that bypass fine-tuning, defenses that calibrate models to improve their robustness, and privacy methods that strengthen the FL strategy.
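As a concrete illustration of the adversarial examples surveyed above, the following minimal sketch applies a fast-gradient-sign-style perturbation to a toy logistic-regression model; the weights and input are illustrative assumptions, not drawn from any surveyed paper.

```python
import numpy as np

# Illustrative "model": fixed logistic-regression weights (assumed values).
w = np.array([0.8, -0.5, 0.3])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Predicted probability of the positive class."""
    return sigmoid(x @ w + b)

def fgsm(x, y, eps=0.1):
    """Perturb x in the sign of the loss gradient w.r.t. the input.

    For logistic loss with label y, d(loss)/dx = (sigmoid(w.x + b) - y) * w.
    """
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 2.0, -1.0])   # illustrative patient-feature vector
y = 1.0                          # true label
x_adv = fgsm(x, y)
# The model's confidence in the true class drops on the perturbed input,
# even though x_adv differs from x by at most eps per feature.
```

Defenses such as adversarial training work by generating perturbed inputs like `x_adv` during training and including them, with the correct label, in the loss.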
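Among the privacy-preserving strategies above, differential privacy is often added to FL by perturbing each client's released statistics. A minimal sketch of the classic Laplace mechanism for a counting query follows; the record values, sensitivity, and privacy budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Counting query: number of patients with a condition.
# Adding or removing one record changes the count by at most 1 (sensitivity = 1).
records = [1, 0, 1, 1, 0, 1]     # illustrative per-patient indicators
true_count = sum(records)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller values of `epsilon` give stronger privacy (more noise) at the cost of accuracy, which is the central trade-off the surveyed FL-plus-differential-privacy combinations try to balance.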
Conclusions: It is critical to explore security and privacy in ML for health, because risks have grown and vulnerabilities remain open. Our study presents strategies and challenges to guide future research on security and privacy issues in ML applied to health systems.