As high-performance computing (HPC) becomes a tool used in many different workflows, quality of service (QoS) becomes increasingly important. In many cases, this includes the reliable execution of an HPC job and the delivery of its results by a certain deadline. The resource and job management system (RJMS), or simply RMS, is responsible for receiving job requests and executing the jobs under a deadline-oriented policy that supports these workflows. In this article, we evaluate how well static resource management policies cope with deadline-constrained HPC jobs and explore two variations of a dynamic policy in this context. As the Hilbert curve-based approach used by the SLURM workload manager represents the state of the art in production environments, it was selected as one of the static allocation strategies. The Manhattan median approach, the second static allocation strategy, was introduced in research that aims to minimize the communication overhead of parallel programs by providing partitions that are more compact than those of the Hilbert curve approach. In contrast to the static partitions produced by the Hilbert curve and Manhattan median approaches, the leak approach focuses on supporting the dynamic runtime behavior of jobs and assigns nodes of the HPC system on demand at runtime. Whereas the contiguous leak variant also relies on a compact set of nodes, the noncontiguous leak variant can provide additional nodes at a greater distance from those already used by the job. Our preliminary results clearly show that a dynamic policy is needed to meet the requirements of a modern deadline-oriented RMS scenario.
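To give a rough intuition for the Hilbert curve-based allocation idea discussed above, the following Python sketch ranks the nodes of a small 2D mesh by their position along a Hilbert curve and picks a window of free nodes that lie close together in that order, which tends to keep allocations compact. This is an illustrative sketch only, not SLURM's implementation; the mesh size, the busy-node set, and the helper names hilbert_index and allocate are assumptions made for the example.

```python
def hilbert_index(order, x, y):
    """Distance of grid point (x, y) along a Hilbert curve covering a
    2**order x 2**order mesh (standard bit-twiddling construction)."""
    n = 1 << order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the sub-curve stays continuous.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d


def allocate(free_nodes, count, order):
    """Pick `count` free nodes that are adjacent in Hilbert-curve order."""
    ranked = sorted(free_nodes, key=lambda xy: hilbert_index(order, *xy))
    if len(ranked) < count:
        return None  # not enough free nodes for this request
    # Slide a window over the ranked list and keep the window whose
    # Hilbert indices span the smallest range (the most compact choice).
    best, best_span = None, None
    for i in range(len(ranked) - count + 1):
        window = ranked[i:i + count]
        span = (hilbert_index(order, *window[-1])
                - hilbert_index(order, *window[0]))
        if best_span is None or span < best_span:
            best, best_span = window, span
    return best


# Example: a 4x4 mesh (order 2) with two nodes already busy.
busy = {(1, 1), (2, 2)}
free_nodes = [(x, y) for x in range(4) for y in range(4) if (x, y) not in busy]
print(allocate(free_nodes, 4, order=2))
```

In contrast to this one-shot, static selection, the leak approaches evaluated in the article would revisit the allocation at runtime, adding nodes on demand either adjacent to the current set (contiguous) or farther away (noncontiguous).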