Backdoor detection methods (Huang et al., 2020; Harikumar et al., 2020; Kwon, 2020; Zhang et al., 2020; Erichson et al., 2020) or backdoor mitigation methods (Yao et al., 2019; Zhao et al., 2020; Liu et al., 2018a) can be utilized to defend against backdoor attacks. Detection methods typically identify whether a backdoor exists in the model by examining its responses to input noise (Erichson et al., 2020) or to universal adversarial perturbations (Zhang et al., 2020). Mitigation methods typically remove backdoors via fine-tuning, including directly fine-tuning the model (Yao et al., 2019), fine-tuning after pruning (Liu et al., 2018a), and fine-tuning guided by knowledge distillation.
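To make the pruning-then-fine-tuning idea concrete, the following is a minimal sketch in that spirit (not the implementation of Liu et al., 2018a): units of a chosen layer that stay dormant on clean data are zeroed out, and the model is then fine-tuned on clean data. The names `model`, `clean_loader`, the targeted linear layer, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def prune_dormant_units(model, layer, clean_loader, prune_ratio=0.2, device="cpu"):
    """Zero out the units of a linear `layer` with the lowest mean activation on clean data."""
    activations = []

    def hook(_module, _inputs, output):
        # Record the mean absolute activation of each unit over the batch.
        activations.append(output.detach().abs().mean(dim=0))

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for x, _ in clean_loader:
            model(x.to(device))
    handle.remove()

    mean_act = torch.stack(activations).mean(dim=0)
    n_prune = int(prune_ratio * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:n_prune]

    # Zeroing the outgoing weights and biases approximates removing the units.
    with torch.no_grad():
        layer.weight[prune_idx] = 0.0
        if layer.bias is not None:
            layer.bias[prune_idx] = 0.0
    return prune_idx

def fine_tune(model, clean_loader, epochs=1, lr=1e-4, device="cpu"):
    """Standard fine-tuning on clean data after pruning."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            optimizer.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()
```

The intuition is that backdoor behavior is often carried by units that are rarely activated by clean inputs, so pruning them and then fine-tuning on clean data can suppress the backdoor while largely preserving clean accuracy.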