2020
DOI: 10.1007/978-3-030-59851-8_21
Interference-Aware Orchestration in Kubernetes


Cited by 14 publications (6 citation statements)
References 19 publications
“…The heterogeneous hardware infrastructure is integrated into a single execution environment using Kubernetes [40], [41] with the Skynet [42], Volcano [43], and the interference-aware custom scheduler [44] extensions. For the unified storage layer, EVOLVE uses open-source software frameworks such as Karvdash [45], Datashim [46], and H3 [47].…”
Section: Big Data and High-Performance Computing Platform EVOLVE
confidence: 99%
“…The rise of "cloud-native" platforms, such as Kubernetes, that facilitate the deployment of applications on lightweight containers and expand their capacity to dynamically scale resources has provided grounds for the study of their impact on the performance of applications. 8,12 A portion of prior scientific works propose novel Kubernetes schedulers to optimize the placement of incoming containerized applications on the underlying cluster, [20][21][22][23] either by minimizing the overall energy utilization of cloud/edge nodes, 20 or by attempting to reduce the interference between co-located workloads on the system or the network level. [21][22][23] Moreover, beyond Kubernetes-based resource management approaches, other scientific papers propose custom solutions able to efficiently place workloads on edge nodes.…”
Section: Related Work
confidence: 99%
“…8,12 A portion of prior scientific works propose novel Kubernetes schedulers to optimize the placement of incoming containerized applications on the underlying cluster, [20][21][22][23] either by minimizing the overall energy utilization of cloud/edge nodes, 20 or by attempting to reduce the interference between co-located workloads on the system or the network level. [21][22][23] Moreover, beyond Kubernetes-based resource management approaches, other scientific papers propose custom solutions able to efficiently place workloads on edge nodes. In Amit Samanta and Jianhua Tang's "Dyme," the authors propose a dynamic microservice scheduling scheme for mobile edge computing, aiming to reduce energy consumption while providing fair Quality-of-Service (QoS) among all devices.…”
Section: Related Work
confidence: 99%
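The interference-aware placement idea described in the quoted passages can be sketched as a node-scoring step, in the spirit of a Kubernetes scheduler extender's "prioritize" callback. Everything below is an illustrative assumption — the function names, the resource keys (`llc`, `mem_bw`), and the pairwise contention model are not taken from any of the cited papers.

```python
# Hypothetical sketch of interference-aware node scoring. The contention
# model (pairwise product of shared-resource pressure) is an assumption
# made for illustration only.

def interference_score(node_pods, candidate_profile, capacity=100.0):
    """Score a node (0-100, higher is better), penalizing expected
    contention between the candidate pod and pods already on the node."""
    penalty = 0.0
    for pod in node_pods:
        # Sum contention estimates on shared resources (e.g. last-level
        # cache and memory bandwidth pressure, on an arbitrary scale).
        for resource in ("llc", "mem_bw"):
            penalty += pod.get(resource, 0.0) * candidate_profile.get(resource, 0.0)
    return max(0.0, capacity - penalty)

def prioritize(nodes, candidate_profile):
    """Rank candidate nodes best-first by interference score.

    `nodes` maps node name -> list of resource profiles of pods
    already placed on that node.
    """
    scored = {name: interference_score(pods, candidate_profile)
              for name, pods in nodes.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```

For example, a candidate with profile `{"llc": 3.0, "mem_bw": 3.0}` would be steered toward a lightly loaded node over one already hosting a cache-heavy pod, which is the qualitative behavior the quoted interference-aware schedulers aim for.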
“…Another crucial aspect focuses on the real-time utilization of node resources to schedule workloads [18]. This load-awareness is especially important in multi-tenancy cases [19], as interference effects such as cache misses and CPU context switches may lead to performance degradation. There are also papers that try to combine several of these aspects and propose a weighted multi-criteria decision strategy with the goal of optimizing workload placement [20,21].…”
Section: Related Work
confidence: 99%
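The weighted multi-criteria, load-aware strategy mentioned in the last quote can be illustrated with a minimal scoring sketch. The metric names, the weights, and the normalization convention (utilization as a fraction in [0, 1]) are assumptions for illustration, not the scheme of the cited works.

```python
# Minimal sketch of a weighted multi-criteria, load-aware placement score.
# Metrics are assumed to be normalized utilizations in [0.0, 1.0]; the
# metric names ("cpu", "mem") and the weights are illustrative.

def weighted_score(metrics, weights):
    """Combine per-resource headroom (1 - utilization) into a single
    score in [0, 1]; higher means a better placement target."""
    total = sum(weights.values())
    return sum(w * (1.0 - metrics.get(name, 1.0))
               for name, w in weights.items()) / total

def pick_node(cluster_metrics, weights):
    """Return the name of the node with the highest weighted score.

    `cluster_metrics` maps node name -> dict of normalized utilizations.
    """
    return max(cluster_metrics,
               key=lambda node: weighted_score(cluster_metrics[node], weights))
```

With CPU weighted twice as heavily as memory, a node at 30% CPU and 40% memory utilization wins over one at 90% and 80%, matching the intuition of steering workloads toward nodes with the most (weighted) real-time headroom.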