With the increasing migration of high-performance computing (HPC) to the cloud, more users submit their applications to the platform simultaneously and expect them to finish before their deadlines. Moreover, because I/O contention causes severe system-wide performance degradation, a deadline-sensitive I/O scheduler is needed to allocate storage resources according to the requirements of applications and thereby guarantee the quality of service (QoS) of concurrently running applications. In this paper, we first explore how interference among applications affects bandwidth allocation by modeling historical data, and then we introduce a metric called random percentage that represents the degree of randomness of an application's I/O and can be used to guide I/O scheduling in a later stage. We design a dynamic I/O scheduler named DDL-QoS that uses solid state drives (SSDs) as a QoS guarantee to minimize interference and ensure that applications meet their deadlines. A key property of our design is that the greater the I/O interference, the greater the performance improvement, although this improvement is ultimately bounded by the physical properties of the storage hardware.

KEYWORDS
Deadline, I/O Scheduler, QoS

1 INTRODUCTION

Nowadays, high-performance computing (HPC) systems are beginning to enter the exascale era [1]. IBM Summit, the fastest supercomputer in the world, used at Oak Ridge National Laboratory, is capable of 200 PetaFlops [2]. Sunway TaihuLight, the fastest supercomputer in China, has a peak performance of 120 PetaFlops [3]. This explosive growth of computing power requires that the underlying parallel file system provide higher performance. At the same time, more and more data-intensive applications run at large scale in HPC systems, resulting in an increasing demand for capable storage systems. In addition, some of the parallel file systems commonly used in HPC systems, such as Lustre [4], GPFS [5], OrangeFS [6], etc., are starting to face significant challenges in terms of performance, complexity, and so on [7]. Meanwhile, HPC is migrating to the cloud as more and more HPC users look to the cloud to help solve their workload challenges, and many public cloud companies have launched HPC products, such as Amazon Web Services [8], Alibaba Cloud Computing [9], etc. This means that storage resources in HPC systems are shared among more and more users.

With the development of scientific applications, the scale of computing keeps growing: more storage resources are needed, and limited storage resources must serve more applications. When multiple applications access the storage service concurrently, they compete for I/O resources, which leads to a serious drop in aggregate I/O bandwidth [10]. In addition, their I/O requests, which have different access modes, become mixed. Hard disk drives (HDDs) can handle sequential requests efficiently, while random requests incur more seek overhead, resulting in overall performance degradation [11]. We call this I/O i...