Backdoor attacks can be implemented in several ways: by modifying the victim network directly [Gu et al., 2017; Zhang et al., 2021], contaminating the pre-trained network used by the victim [Kurita et al., 2020; Gu et al., 2017], poisoning the training dataset [Yang et al., 2017], or even modifying the training process or loss function [Bagdasaryan and Shmatikov, 2021]. In some cases these methods are combined, as in [Qi et al., 2021], where the poisoned training set and the network weights are learned jointly. A comprehensive review of backdoor attacks against neural networks can be found in [Li et al., 2022].
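
To make the data-poisoning vector concrete, the following is a minimal sketch of a BadNets-style poisoning step, not the exact procedure of any cited paper: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled to the attacker's target class. The function name `poison_dataset`, the 4x4 trigger size and position, and the poisoning rate are all illustrative assumptions, as is the input format (grayscale images normalized to [0, 1]).

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=None):
    """Illustrative BadNets-style poisoning: stamp a white square (the
    trigger) onto a random subset of images and relabel them to the
    attacker's target class. All parameters here are hypothetical."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Pick a random subset of training examples to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a 4x4 trigger in the bottom-right corner and flip the labels.
    images[idx, -4:, -4:] = 1.0
    labels[idx] = target_label
    return images, labels

# Toy usage: 100 random grayscale 28x28 "images" with 10 classes.
x = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned = poison_dataset(x, y, target_label=7)
```

A model trained on the poisoned set learns to associate the trigger patch with the target class while behaving normally on clean inputs, which is what makes the backdoor hard to detect by accuracy alone.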