Continual learning approaches are useful because they help a model learn new information (classes) sequentially while retaining previously acquired information (classes). However, such approaches have been shown to be extremely vulnerable to adversarial backdoor attacks, where an intelligent adversary can introduce a small amount of misinformation into the model in the form of an imperceptible backdoor pattern during training to cause deliberate forgetting of a specific task or class at test time. In this work, we propose a novel defensive framework to counter such an insidious attack: we turn the attacker's primary strength, hiding the backdoor pattern by making it imperceptible to humans, against the attacker, and propose to learn a perceptible (stronger) pattern (also during training) that can overpower the attacker's imperceptible (weaker) pattern. We demonstrate the effectiveness of the proposed defensive mechanism through various commonly used replay-based (both generative and exact replay) continual learning algorithms, using continual learning benchmark variants of the CIFAR-10, CIFAR-100, and MNIST datasets. Most notably, we show that our proposed defensive framework considerably improves the robustness of continual learning algorithms with no knowledge of the attacker's target task, the attacker's target class, or the shape, size, and location of the attacker's pattern. Moreover, our defensive framework does not depend on the underlying continual learning algorithm and does not rely on detecting attack samples and removing them from further consideration; instead, it attempts to correctly classify even the attack samples, thereby ensuring robustness in continual learning models. We term our defensive framework Adversary Aware Continual Learning (AACL).

INDEX TERMS Continual (incremental) learning, misinformation, false memory, backdoor attack, poisoning.