2018
DOI: 10.1145/3239563

Performance Characterization of NVMe-over-Fabrics Storage Disaggregation

Abstract: Storage disaggregation separates compute and storage onto different nodes to allow for independent resource scaling and, thus, better hardware resource utilization. While disaggregation of hard-drive storage is a common practice, NVMe-SSD (i.e., PCIe-based SSD) disaggregation is considered more challenging. This is because SSDs are significantly faster than hard drives, so the latency overheads (due to both network and CPU processing) as well as the extra compute cycles needed for the offloading stack become mu…
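
A back-of-the-envelope sketch (in Python) of the abstract's argument: the same per-I/O remote-access overhead that is negligible for a disaggregated hard drive becomes a sizable fraction of an NVMe SSD's latency. All figures are illustrative assumptions, not measurements from the paper.

# Why SSD disaggregation is harder than HDD disaggregation: a fixed
# remote-access overhead is tiny relative to HDD latency but large
# relative to NVMe SSD latency. All values are illustrative assumptions.
HDD_LATENCY_US = 5_000     # assumed ~5 ms HDD random-read latency
NVME_LATENCY_US = 100      # assumed ~100 us NVMe SSD random-read latency
REMOTE_OVERHEAD_US = 12    # assumed per-I/O network + CPU processing overhead

for name, local_us in (("HDD", HDD_LATENCY_US), ("NVMe SSD", NVME_LATENCY_US)):
    remote_us = local_us + REMOTE_OVERHEAD_US
    pct = 100 * REMOTE_OVERHEAD_US / local_us
    print(f"{name}: local {local_us} us -> remote {remote_us} us (+{pct:.1f}%)")
# HDD: +0.2%, effectively free.  NVMe SSD: +12%, clearly visible to applications.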

Cited by 23 publications (11 citation statements)
References 19 publications

“…To reduce the network latency of data replication, the NVMe over Fabrics protocol is a promising technology. Guz et al. [13] applied this protocol to storage disaggregation. They showed that there is no significant performance difference between local and remote storage when using the NVMe over Fabrics protocol for disaggregation, but they did not apply it to data replication.…”
Section: Related Work (mentioning, confidence: 99%)

“…The NVMe over Fabrics protocol provides low-latency remote I/O by using RDMA over Converged Ethernet [12]. The overhead of the NVMe over Fabrics protocol was reported to be 11.7 µs in [13].…”
Section: Introduction (mentioning, confidence: 99%)

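For context, a minimal sketch of how a host would attach such a remote NVMe-oF subsystem over RDMA using the standard nvme-cli tool (driven here from Python); the target address and NQN are hypothetical placeholders, and the default NVMe-oF service port 4420 is assumed.

import subprocess

TARGET_ADDR = "192.168.1.10"                    # hypothetical RoCE-capable NIC address
TARGET_NQN = "nqn.2018-01.example:remote-nvme"  # hypothetical subsystem NQN

def connect_remote_nvme() -> None:
    """Attach a remote NVMe-oF subsystem over RDMA via nvme-cli."""
    subprocess.run(
        ["nvme", "connect",
         "-t", "rdma",       # transport type: RDMA (RoCE or InfiniBand)
         "-a", TARGET_ADDR,  # target address (traddr)
         "-s", "4420",       # NVMe-oF default service port (trsvcid)
         "-n", TARGET_NQN],  # NVMe Qualified Name of the subsystem
        check=True,
    )

if __name__ == "__main__":
    connect_remote_nvme()  # the remote namespace then appears as a local /dev/nvmeXnY
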
“…Guz et al. [9] evaluated NVMeoF performance and compared it with locally attached NVMe using synthetic and KV-store workloads. The study was further extended [11] by assessing the overhead of SPDK compared to the Linux kernel implementation. Xu et al. [10] evaluated the performance of overlay file systems with NVMeoF.…”
Section: Related Work (mentioning, confidence: 99%)

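A sketch of that local-versus-remote comparison methodology using fio, a standard synthetic I/O generator; the device paths are hypothetical (/dev/nvme1n1 is assumed to be the remotely attached NVMe-oF namespace) and the job parameters are illustrative, not the paper's exact configuration.

import subprocess

# Run an identical 4 KiB random-read job against a local and a remote device;
# the difference in measured latency isolates the network + protocol overhead.
DEVICES = {"local": "/dev/nvme0n1", "remote": "/dev/nvme1n1"}  # hypothetical paths

for label, dev in DEVICES.items():
    subprocess.run(
        ["fio",
         f"--name={label}",
         f"--filename={dev}",
         "--rw=randread",     # random reads
         "--bs=4k",           # 4 KiB blocks
         "--iodepth=32",
         "--ioengine=libaio",
         "--direct=1",        # bypass the page cache
         "--time_based",
         "--runtime=60"],
        check=True,
    )
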
“…Previous studies characterized the performance of NVMeoF in various aspects [9-11]; however, they are limited to NVMeoF in LANs. Furthermore, there is no previous study characterizing the performance of NVMe-TCP in MANs and WANs.…”
Section: Introduction (mentioning, confidence: 99%)

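A quick worked example of why distance matters here: over MAN and WAN distances, round-trip propagation delay alone dwarfs the device latency. The distances and the 100 µs SSD latency are illustrative assumptions; light propagates through fiber at roughly 200,000 km/s.

# Round-trip propagation delay vs. NVMe SSD latency at different network scales.
FIBER_SPEED_KM_PER_S = 200_000  # ~2/3 of the speed of light in vacuum
SSD_LATENCY_US = 100            # assumed local NVMe random-read latency

for scope, distance_km in (("LAN", 0.1), ("MAN", 100), ("WAN", 2_000)):
    rtt_us = 2 * distance_km / FIBER_SPEED_KM_PER_S * 1e6
    print(f"{scope} ({distance_km} km): RTT {rtt_us:,.0f} us, "
          f"{rtt_us / SSD_LATENCY_US:.1f}x the SSD latency")
# LAN: ~1 us (negligible).  MAN: ~1,000 us (10x).  WAN: ~20,000 us (200x).
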
“…NVMe reduces storage access to memory access and block access. NVMe can thus replace traditional storage access methods and can be implemented over different interconnects (NVMe over PCIe, NVMe over Fabrics, NVMe over Ethernet, NVMe over InfiniBand, RDMA offload) [23, 29-31]. It can also build on high-speed Ethernet environments to access resources and storage through RDMA over Converged Ethernet (RoCE) [23].…”
mentioning (confidence: 99%)

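To make the export side concrete, a hedged sketch of publishing a local SSD over RDMA through the Linux kernel NVMe target (nvmet) configfs interface; the NQN, backing device, and address are hypothetical placeholders, and the script assumes root privileges with the nvmet and nvmet-rdma modules loaded.

import os
from pathlib import Path

CFG = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2018-01.example:remote-nvme"  # hypothetical subsystem name

# Create the subsystem and allow any host to connect (demo only; no host ACL).
subsys = CFG / "subsystems" / NQN
subsys.mkdir(parents=True)
(subsys / "attr_allow_any_host").write_text("1\n")

# Back namespace 1 with a local NVMe SSD and enable it.
ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
(ns / "device_path").write_text("/dev/nvme0n1\n")  # hypothetical local SSD
(ns / "enable").write_text("1\n")

# Create an RDMA (RoCE/InfiniBand) port on the standard NVMe-oF service port.
port = CFG / "ports" / "1"
port.mkdir(parents=True)
(port / "addr_trtype").write_text("rdma\n")
(port / "addr_adrfam").write_text("ipv4\n")
(port / "addr_traddr").write_text("192.168.1.10\n")  # hypothetical NIC address
(port / "addr_trsvcid").write_text("4420\n")

# Expose the subsystem on the port by linking it in.
os.symlink(subsys, port / "subsystems" / NQN)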