In the autonomous driving environment, object instances in an image can be affected by various factors such as the camera, driving state, weather, and system components. However, deep learning-based vision systems are vulnerable to perturbations such as noise. Thus, robust object detection under harsh autonomous-driving environments is more difficult than in the generic setting. In this paper, it is found that not only the accuracy but also the speed of a non-maximum suppression-based detector can degrade under harsh environments. Therefore, object detection under harsh conditions is addressed with adversarial mechanisms, namely adversarial training and adversarial defence. Adversarial defence modules are designed to improve robustness at the feature extraction level, and perturbations arising under harsh environments are defined for training object detectors so that the model's decision boundary becomes more robust. The proposed adversarial defence and training mechanisms improve the object detector in both accuracy and speed. The proposed method achieves 43.7% mean average precision on the COCO2015 dataset for generic object detection and 39.0% mean average precision on the BDD100K dataset in a driving environment. Furthermore, it achieves real-time operation at 23 frames per second.
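To illustrate the adversarial-training side of such a pipeline, the following is a minimal sketch of one training step in which input images are perturbed with an FGSM-style attack before the detector is updated. It uses torchvision's Faster R-CNN purely as a stand-in detector and assumes images in the [0, 1] range; the paper's own defence modules, perturbation definitions, and detector architecture are not reproduced here.

```python
# Hedged sketch: FGSM-style adversarial training step for an object detector.
# `fasterrcnn_resnet50_fpn` is used only as a generic stand-in model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=91)
optimizer = torch.optim.SGD(detector.parameters(), lr=1e-3, momentum=0.9)

def adversarial_step(images, targets, epsilon=2 / 255):
    """One update on FGSM-perturbed images (pixel range assumed to be [0, 1])."""
    detector.train()
    images = [img.clone().detach().requires_grad_(True) for img in images]

    # Clean forward pass: torchvision detectors return a loss dict in train mode.
    loss_clean = sum(detector(images, targets).values())
    grads = torch.autograd.grad(loss_clean, images)

    # FGSM: perturb each image along the sign of its input gradient.
    images_adv = [(img + epsilon * g.sign()).clamp(0, 1).detach()
                  for img, g in zip(images, grads)]

    # Update the detector on the perturbed batch.
    optimizer.zero_grad()
    loss_adv = sum(detector(images_adv, targets).values())
    loss_adv.backward()
    optimizer.step()
    return loss_adv.item()
```

In this sketch the perturbation budget `epsilon` and the choice of a single-step (FGSM) attack are illustrative assumptions; harsher or multi-step perturbations could be substituted in the same loop.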