Building a robust object detector (ROD) is important in real-world applications because snow, rain, fog, motion blur, and various other kinds of corruption can occur in autonomous-driving environments. Adversarial training (AT) is one of the most effective ways to build a robust deep neural network. However, applying AT risks sacrificing clean performance even as robustness improves, a phenomenon known as catastrophic forgetting (CF). CF is particularly challenging in autonomous-driving environments for two reasons. First, CF is aggravated by the wide variety of corruption types. Second, because more than 60% of the total data is clean (based on BDD100K), degradation of clean performance risks degrading overall performance. We therefore propose an ROD framework that ensures robustness against corruption while preventing degradation of clean performance, despite these two difficulties. The ROD framework uses a training methodology with an adversarial defense module (ADM) based on the intermediate representative feature (IRF) concept, and it improves robustness without CF in multi-corruption environments. In this paper, we report three main achievements. First, the mean performance under corruption (mPC) of RetinaNet was improved by 32.14% with an mAP degradation of only 0.2% on COCO 2017. Second, our method achieved state-of-the-art results with 86.8% relative performance under corruption (rPC), compared with 64.6% rPC for Hybrid Task Cascade, on the ROD benchmark. Third, our ROD methodology achieved 32.29% and 31.54% mPC on 15 types of seen corruption and four types of unseen corruption, respectively. The ROD framework was also applied to the autonomous-driving domain, where it operates well even under the harsh conditions in the BDD100K dataset.
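For context, the mPC and rPC metrics referenced above are assumed to follow the standard definitions used in the corruption-robustness benchmark literature; a sketch of those definitions is given below, where $P_{\text{clean}}$ is the mAP on clean data, $P_{c,s}$ is the mAP under corruption type $c$ at severity level $s$, $N_c$ is the number of corruption types, and $N_s$ is the number of severity levels.
\[
\mathrm{mPC} = \frac{1}{N_c N_s} \sum_{c=1}^{N_c} \sum_{s=1}^{N_s} P_{c,s},
\qquad
\mathrm{rPC} = \frac{\mathrm{mPC}}{P_{\text{clean}}}
\]
Under these definitions, rPC expresses how much of the clean-data performance is retained when averaging over all corruption types and severities, which is why a higher rPC indicates less degradation under corruption.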