Intel SGX has attracted much attention from academia and is already powering commercial applications. Cloud providers have also started implementing SGX in their cloud offerings. Research efforts on Intel SGX so far have mainly concentrated on its security and programmability. However, no work has studied in detail the performance degradation caused by SGX in virtualized systems. Such settings are particularly important, considering that virtualization is the de facto building block of cloud infrastructure, yet often comes with a performance impact. This paper presents for the first time a detailed performance analysis of Intel SGX in a virtualized system in comparison with a bare-metal system. Based on our findings, we identify several optimization strategies that would improve the performance of Intel SGX on such systems.
Nowadays, virtualization is a central element in data centers, as it allows server resources to be shared among multiple users across virtual machines (VMs). These servers often follow a Non-Uniform Memory Access (NUMA) architecture, consisting of independent nodes with their own cache hierarchies and I/O controllers. In this work, we investigate the impact of such an architecture on network access. As network devices are typically connected to one particular NUMA node, access to a device from that node is faster than from the others. This phenomenon is called Non-Uniform I/O Access (NUIOA). This non-uniformity degrades the performance of I/O applications that are not executed on the appropriate NUMA node. In this paper, we are interested in NUIOA effects in virtualized environments. Our contribution is twofold: 1) we thoroughly study the impact of NUIOA on application performance in VMs, and 2) we propose a resource allocation strategy for VMs that reduces the impact of NUIOA. We implemented our allocation strategy in the Xen hypervisor and evaluated it with well-known benchmarks. The results show that with our NUIOA-aware allocation scheme, we can improve the performance of applications in VMs by up to 20% compared to common allocation strategies.
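The allocation scheme itself is not detailed in this abstract. As a rough sketch of the underlying idea of NUIOA-aware placement (not the paper's actual Xen mechanism), the snippet below reads the NUMA node of a NIC from Linux sysfs and hard-pins a guest's vCPUs to that node with the xl toolstack. The interface name, the domain name, and the choice to pin all vCPUs are assumptions made for illustration only.

    #!/usr/bin/env python3
    """Illustrative sketch only: place a Xen guest on the NUMA node of its NIC.
    Not the allocation strategy from the paper; names below are hypothetical."""
    import subprocess
    from pathlib import Path

    IFACE = "eth0"       # hypothetical NIC carrying the guest's traffic
    DOMAIN = "guest-vm"  # hypothetical Xen domain name

    def nic_numa_node(iface: str) -> int:
        """Return the NUMA node of the NIC's PCI device (-1 if unknown)."""
        node_file = Path(f"/sys/class/net/{iface}/device/numa_node")
        return int(node_file.read_text().strip())

    def pin_domain_to_node(domain: str, node: int) -> None:
        """Hard-pin all vCPUs of the domain to the pCPUs of the given NUMA node,
        using xl's 'node:N' CPU-list notation."""
        subprocess.run(["xl", "vcpu-pin", domain, "all", f"node:{node}"], check=True)

    if __name__ == "__main__":
        node = nic_numa_node(IFACE)
        if node >= 0:
            pin_domain_to_node(DOMAIN, node)
            print(f"pinned {DOMAIN} to NUMA node {node} (node of {IFACE})")
        else:
            print(f"NUMA node of {IFACE} unknown; leaving placement unchanged")

Memory locality matters as much as CPU placement for NUIOA, so a fuller version of this idea would also constrain the guest's memory to the same node, for example through Xen's NUMA placement support or cpupools.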