…5

Due to their practical relevance, various scientific articles have been published on training-time attacks against ML models. While the vast majority of the poisoning literature focuses on supervised classification models in the computer vision domain, we remark that data poisoning was investigated earlier in cybersecurity [125,133], and more recently also in other application domains, such as audio [1,90] and natural language processing [34,205], as well as against different learning paradigms, such as federated learning [4,189], unsupervised learning [17,41], and reinforcement learning [10,204].

Within this survey, we provide a comprehensive framework for threat modeling of poisoning attacks and a categorization of defenses.

1 https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot
2 https://www.khaleejtimes.com/technology/ai-getting-out-of-hand-chinese-chatbots-re-educated-after-rogue-rants
3 https://www.vice.com/en/article/akd4g5/ai-chatbot-shut-down-after-learning-to-talk-like-a-racist-asshole
4 http://www.nickdiakopoulos.com/2013/08/06/algorithmic-defamation-the-case-of-the-shameless-autocomplete/
5 https://www.timebulletin.com/jewish-baby-stroller-image-algorithm/