“…Unlike adversarial attacks, which usually act during the inference stage of a neural model [17,38,49,53,63,66,74,84,85], backdoor attacks compromise the model during training [10,22,40,51,61,62,75,82]. Defending against such attacks is challenging [8,23,37,41,57,73] because users have no idea what kind of poison has been injected during model training.…”
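To make the training-time threat model concrete, the following is a minimal sketch of trigger-based data poisoning in an image-classification setting. The function name `poison_dataset`, the patch trigger, the poison rate, and the NumPy-only setup are all illustrative assumptions, not a method from the cited works: a small fraction of training images receive a fixed corner patch and have their labels flipped to an attacker-chosen target, so a model trained on this data behaves normally on clean inputs but predicts the target class whenever the patch appears.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Hypothetical patch-trigger backdoor: poison a fraction of the training set.

    `images` is assumed to be an array of shape (N, H, W) with values in [0, 1];
    `labels` an integer array of shape (N,).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # stamp a 3x3 white trigger patch in the corner
    labels[idx] = target_label    # flip poisoned labels to the attacker's target class
    return images, labels

# Toy usage: 100 grayscale 28x28 images, 10 classes.
clean_x = np.random.rand(100, 28, 28).astype(np.float32)
clean_y = np.random.randint(0, 10, size=100)
poisoned_x, poisoned_y = poison_dataset(clean_x, clean_y, target_label=7)
```

This also illustrates why the defense problem in the quoted passage is hard: only a few percent of examples are altered, the trigger is small, and the defender does not know its shape, location, or target class, in contrast to inference-time adversarial perturbations, which require no access to training data.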