Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years. The proposed attacks have aimed solely at compromising the models' integrity (i.e., the trustworthiness of the model's predictions), while adversarial attacks targeting the models' availability, a critical aspect in safety-critical domains such as autonomous driving, have not been explored by the machine learning research community. In this paper, we propose NMS-Sponge, a novel approach that negatively affects the decision latency of YOLO, a state-of-the-art object detector, and compromises the model's availability by applying a universal adversarial perturbation (UAP). In our experiments, we demonstrate that the proposed UAP is able to increase the processing time of individual frames by adding "phantom" objects while preserving the detection of the original objects.

Recently, availability-based attacks have been shown to be effective against deep learning-based models. Shumailov et al. [20] presented sponge examples, which are perturbed inputs designed to increase the energy consumed by natural language processing (NLP) and computer vision models deployed on hardware accelerators, by increasing the number of active neurons during classification. Following this work, other studies have proposed sponge-like attacks, mainly targeting image classification models [4,3,6,10].
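To give intuition for why inflating the number of candidate detections degrades availability, the minimal sketch below (not the paper's implementation) times non-maximum suppression (NMS) as the number of candidate boxes grows; it assumes PyTorch and torchvision are available and uses randomly generated boxes as stand-ins for the "phantom" objects induced by the UAP.

```python
# Illustrative sketch only: measure how NMS latency grows with the number of
# candidate boxes, the bottleneck that phantom detections inflate.
import time

import torch
from torchvision.ops import nms


def time_nms(num_boxes: int, iou_threshold: float = 0.5, repeats: int = 10) -> float:
    """Return the mean wall-clock time (seconds) of one NMS call over random boxes."""
    # Random axis-aligned boxes in (x1, y1, x2, y2) format inside a 640x640 frame.
    xy1 = torch.rand(num_boxes, 2) * 600
    wh = torch.rand(num_boxes, 2) * 40 + 1
    boxes = torch.cat([xy1, xy1 + wh], dim=1)
    scores = torch.rand(num_boxes)

    start = time.perf_counter()
    for _ in range(repeats):
        nms(boxes, scores, iou_threshold)
    return (time.perf_counter() - start) / repeats


if __name__ == "__main__":
    # More candidate ("phantom") boxes -> more pairwise IoU comparisons -> higher latency.
    for n in (100, 1_000, 10_000, 50_000):
        print(f"{n:>6} candidate boxes: {time_nms(n) * 1e3:.2f} ms per NMS call")
```

The exact numbers depend on hardware and the NMS implementation; the point of the sketch is only that per-frame post-processing cost scales with the candidate count, which is the quantity the proposed UAP drives up.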