Advances in speech synthesis have exposed the vulnerability of spoofing countermeasure (CM) systems. Adversarial attacks exacerbate this problem, largely because most CM models rely on deep neural networks. While adversarial attacks on anti-spoofing systems have received considerable attention, studies on effective defense techniques remain relatively scarce. In this study, we propose a defense strategy against such attacks that augments the training data with frequency band-pass filtering and denoising. Our approach aims to limit the impact of adversarial perturbations, thereby reducing susceptibility to adversarial samples. Furthermore, our findings reveal that combining Max-Feature-Map (MFM) activations with frequency band-pass filtering provides additional benefits in suppressing different noise types. To empirically validate this hypothesis, we evaluate different CM models on adversarial samples derived from the ASVspoof challenge and other well-known datasets. The evaluation results show that such defense mechanisms can enhance the robustness of spoofing countermeasure systems.
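As a minimal sketch of the band-pass filtering component of this augmentation (the filter type, cutoff frequencies, and function names below are illustrative assumptions, not the paper's exact configuration), a Butterworth band-pass filter can attenuate out-of-band frequency content, where part of an adversarial perturbation's energy may reside, before the waveform is fed to a CM model:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_augment(wav, sr, low_hz=300.0, high_hz=3400.0, order=4):
    """Apply a Butterworth band-pass filter to a waveform.

    Frequencies outside [low_hz, high_hz] are attenuated; the cutoff
    values here are hypothetical defaults for illustration only.
    """
    nyq = sr / 2.0
    sos = butter(order, [low_hz / nyq, high_hz / nyq],
                 btype="bandpass", output="sos")
    # sosfiltfilt applies the filter forward and backward (zero phase).
    return sosfiltfilt(sos, wav)

# Example: a 100 Hz tone (out of band) plus a 1 kHz tone (in band).
sr = 16000
t = np.arange(sr) / sr
wav = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
filtered = bandpass_augment(wav, sr)
```

In a training pipeline, such a filter would be applied to (a subset of) utterances before feature extraction, so the CM model learns representations that are less sensitive to out-of-band perturbations.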