Deep learning models are vulnerable to backdoor poisoning attacks: by injecting only a few poison samples into a training set, adversaries can embed a stealthy backdoor into the trained model. In this work, we study poison sample detection for defending against backdoor poisoning attacks on deep neural networks (DNNs). A principled idea underlying prior work on this problem is to exploit the distinguishable behaviors of backdoored models on the poison and clean populations in order to separate these two populations and remove the identified poison. Typically, prior work builds detectors upon a latent separability assumption, which states that backdoored models trained on a poisoned dataset learn separable latent representations for backdoor and clean samples. Although such separation behaviors empirically exist for many existing attacks, defenders have no control over this separability, and the extent of separation can vary considerably across poison strategies, datasets, and the training configurations of the backdoored models. Worse still, recent adaptive poison strategies can greatly reduce these "distinguishable behaviors" and consequently render most prior detectors less effective, or make them fail entirely. We point out that these limitations stem directly from a passive reliance on behaviors that are not under the defender's control. To mitigate these limitations, we propose the idea of active defense: rather than passively assuming that backdoored models will exhibit certain distinguishable behaviors on poison and clean samples, we actively enforce the trained models to behave differently on these two populations. Specifically, we introduce confusion training as a concrete instance of active defense. Confusion training separates the poison and clean populations by applying an additional poisoning attack to the already poisoned dataset; this attack actively decouples the benign correlations and leaves the backdoor correlations as the only learnable patterns, so that only backdoor poison samples can be fitted while clean samples remain underfitted. In short, we literally invite a "defensive poison" in to fight the original backdoor poison we aim to cleanse. Through extensive evaluations on both CIFAR10 and GTSRB, we demonstrate the superiority of active defense across a diverse set of backdoor poisoning attacks.
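To make the confusion training idea sketched above concrete, the following is a minimal illustrative sketch in PyTorch, assuming the defender holds a small reserved clean set whose samples are randomly relabeled each batch to serve as the "defensive poison". The function name, the weighting factor `confusion_weight`, and the other hyperparameters are illustrative placeholders, not the exact configuration evaluated in the paper.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def confusion_training(model, poisoned_set, clean_reserved_set,
                       num_classes, epochs=10, lr=0.01,
                       confusion_weight=3.0, device="cuda"):
    """Illustrative sketch of confusion training (not the paper's exact recipe).

    The reserved clean samples are given random labels on every batch,
    which decouples benign input-label correlations. Only the backdoor
    correlation remains consistent, so only backdoor poison samples in
    `poisoned_set` end up being fitted; clean samples stay underfitted.
    """
    model = model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    poison_loader = DataLoader(poisoned_set, batch_size=128, shuffle=True)
    confusion_loader = DataLoader(clean_reserved_set, batch_size=128, shuffle=True)

    model.train()
    for _ in range(epochs):
        confusion_iter = iter(confusion_loader)
        for x_p, y_p in poison_loader:
            try:
                x_c, _ = next(confusion_iter)
            except StopIteration:
                confusion_iter = iter(confusion_loader)
                x_c, _ = next(confusion_iter)
            # Random labels destroy the clean correlations in the confusion batch.
            y_c = torch.randint(0, num_classes, (x_c.size(0),))
            x_p, y_p = x_p.to(device), y_p.to(device)
            x_c, y_c = x_c.to(device), y_c.to(device)

            loss = (F.cross_entropy(model(x_p), y_p)
                    + confusion_weight * F.cross_entropy(model(x_c), y_c))
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Samples that the confused model still fits are flagged as suspected poison.
    suspected = []
    model.eval()
    with torch.no_grad():
        eval_loader = DataLoader(poisoned_set, batch_size=128, shuffle=False)
        for batch_idx, (x, y) in enumerate(eval_loader):
            preds = model(x.to(device)).argmax(dim=1).cpu()
            fitted = (preds == y).nonzero(as_tuple=True)[0] + batch_idx * 128
            suspected.extend(fitted.tolist())
    return suspected
```

The final loop encodes the detection criterion implied by the abstract: after training on the union of the poisoned dataset and the randomly relabeled confusion batches, samples whose given labels the model still predicts correctly are treated as suspected poison and removed before retraining a clean model.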