Proceedings of the 36th ACM International Conference on Supercomputing 2022
DOI: 10.1145/3524059.3532378
Towards low-latency I/O services for mixed workloads using ultra-low latency SSDs

Abstract: Low-latency I/O services are essential for latency-sensitive workloads when they co-run with throughput-oriented workloads in cloud data centers. Although advanced SSDs such as Intel Optane SSDs can offer ultra-low latency at the device layer, I/O interference among various workloads through the I/O stack can still significantly enlarge I/O latency. It is still an open problem to best utilize ultra-low latency SSDs in cloud computing environments. In this paper, we analyze the entire I/O stack and reveal that I…
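The interference problem the abstract describes can be made concrete with a small measurement. The following sketch is not from the paper; the device path, address range, and read count are illustrative assumptions. It times 4 KiB O_DIRECT random reads on an NVMe/Optane-class device; running it alone and then alongside a throughput-oriented job on the same device shows how much the software stack and co-runners inflate average and tail latency.

#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLK   4096
#define READS 1000

int main(void)
{
    /* Hypothetical device path; any fast NVMe namespace works. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLK, BLK)) { close(fd); return 1; }  /* O_DIRECT needs aligned buffers */

    double total_us = 0.0, max_us = 0.0;
    srand(42);
    for (int i = 0; i < READS; i++) {
        off_t off = ((off_t)(rand() % (1 << 20))) * BLK;   /* random block within the first 4 GiB */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (pread(fd, buf, BLK, off) != BLK) { perror("pread"); break; }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        total_us += us;
        if (us > max_us) max_us = us;
    }
    printf("avg %.1f us, max %.1f us over %d reads\n", total_us / READS, max_us, READS);

    free(buf);
    close(fd);
    return 0;
}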

Cited by 7 publications (4 citation statements)
References 11 publications
“…These approaches primarily address differences in memory access times incurred by interconnects between processors, e.g., NThread [40] and AASH [42] proposed thread migration as a means of avoiding contention in processor interconnects. There have been efforts directed toward load balancing in storage systems [44][45][46][47][48]. FastResponse [45] proposed several schemes applicable across the Linux storage stack to mitigate I/O interference between co-running low-latency I/O services.…”
Section: Related Work
confidence: 99%
“…There have been efforts directed toward load balancing in storage systems [44][45][46][47][48]. FastResponse [45] proposed several schemes applicable across the Linux storage stack to mitigate I/O interference between co-running low-latency I/O services. The blk-switch [44] achieved low latency and high bandwidth in the block storage layer by adopting a multi-queue design with load balancing and scheduling techniques.…”
Section: Related Work
confidence: 99%
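Neither FastResponse nor blk-switch is reproduced by these excerpts. As one concrete, standard knob in the same spirit of separating latency-sensitive from throughput traffic inside the Linux storage stack, the sketch below places the calling process in the real-time I/O class via the ioprio_set(2) syscall. This illustrates the general approach only, not the mechanism of either cited system; the constants are copied from the kernel's linux/ioprio.h because glibc does not wrap the call, and the priority is only honored by block schedulers that implement I/O classes (e.g., BFQ), not by "none".

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* ioprio constants as defined in the kernel's linux/ioprio.h;
 * glibc provides no wrapper for ioprio_set, so we call it via syscall(). */
#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_PRIO_VALUE(cls, data) (((cls) << IOPRIO_CLASS_SHIFT) | (data))
enum { IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE };
enum { IOPRIO_WHO_PROCESS = 1, IOPRIO_WHO_PGRP, IOPRIO_WHO_USER };

int main(void)
{
    /* Move the calling (latency-sensitive) process into the real-time I/O
     * class at the highest level; co-running best-effort throughput jobs
     * keep the default class and are served behind it by schedulers that
     * honor I/O priorities. */
    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0 /* self */,
                IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 0)) < 0) {
        perror("ioprio_set");
        return 1;
    }
    /* ... issue latency-critical I/O from here on ... */
    return 0;
}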
“…Previous works [100], [241], [260], [261], [262], [263], [264], [265] propose several techniques to mitigate block I/O latencies for fast NVMe devices. These techniques include software [100], [261], [262], [263], [264], [265] and hardware solutions [241], [260] to provide lower I/O access latency [100], [263], [264], page fault handling [260], and I/O scheduling [241], [261], [265].…”
Section: Improving Block I/O Latency for Fast NVMe Devices
confidence: 99%
“…Previous works [100], [241], [260], [261], [262], [263], [264], [265] propose several techniques to mitigate block I/O latencies for fast NVMe devices. These techniques include software [100], [261], [262], [263], [264], [265] and hardware solutions [241], [260] to provide lower I/O access latency [100], [263], [264], page fault handling [260], and I/O scheduling [241], [261], [265]. Even though these techniques are promising solutions to reduce the high block I/O latencies, they require substantial changes in the hardware and the software stack, which are outside the scope of this work, but can also be used in our proposed system.…”
Section: Improving Block I/O Latency for Fast NVMe Devices
confidence: 99%
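As a minimal illustration of the "lower I/O access latency" direction mentioned above (not any specific cited design), the sketch below issues a direct read through io_uring with IORING_SETUP_IOPOLL, so the completion is busy-polled instead of interrupt-driven, removing interrupt and context-switch overhead on fast NVMe devices. It assumes liburing is installed and uses a hypothetical device path; IOPOLL requires the file to be opened with O_DIRECT.

#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);   /* hypothetical device path */
    if (fd < 0) { perror("open"); return 1; }

    struct io_uring ring;
    int ret = io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL);  /* busy-poll completions */
    if (ret < 0) { fprintf(stderr, "io_uring_queue_init: %s\n", strerror(-ret)); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLK, BLK)) return 1;          /* O_DIRECT alignment */

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, BLK, 0);              /* 4 KiB read at offset 0 */
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    ret = io_uring_wait_cqe(&ring, &cqe);                  /* spins on the CQ in IOPOLL mode */
    if (ret < 0) { fprintf(stderr, "wait_cqe: %s\n", strerror(-ret)); return 1; }
    printf("read returned %d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    free(buf);
    close(fd);
    return 0;
}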