2012
DOI: 10.1587/transinf.e95.d.2377
Cache-Aware Virtual Machine Scheduling on Multi-Core Architecture

Abstract: SUMMARY: Facing practical limits to increasing processor frequencies, manufacturers have resorted to multi-core designs in their commercial products. In multi-core implementations, cores in a physical package share the last-level caches to improve inter-core communication. To efficiently exploit this facility, operating systems must employ cache-aware schedulers. Unfortunately, virtualization software, which is a foundation technology of cloud computing, is not yet cache-aware or does not fully exploit the local…

Cited by 5 publications (4 citation statements)
References 19 publications
“…To hide the latency of main memory access, hierarchical cache subsystems have been proposed and used on multi-core systems [23], [24]. Network processors designed for high-speed PPAs rely heavily on cache technologies owing to the restricted CPU time budget per packet [25], [26].…”
Section: Stage Processing Time Model for MPL
confidence: 99%
“…In addition, the most suitable value of the parameter K can be estimated as in Eq. (24). If K < N_pproc, I-cache affinity is revealed, but cores still need to process all N_paccept protocols. When K ≥ N_pproc, the packets of one protocol will be 'locked' onto the same core in the stable state.…”
Section: Algorithm 1: PAPS Algorithm
confidence: 99%
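The "locked" case described in the quote above can be illustrated with a small sketch. All names and the hashing scheme here are assumptions for illustration, not taken from the cited paper: packets are hashed into K static buckets by protocol, and each bucket is owned by one of N_pproc cores, so with K ≥ N_pproc every packet of a given protocol reaches the same core and that core's instruction cache keeps only that protocol's code path.

```python
# Hypothetical sketch (names and hashing scheme are assumptions, not
# from the cited paper): packets are hashed into K buckets by protocol,
# and each bucket is statically served by one of n_cores cores. When
# K >= n_cores, every packet of a given protocol lands on the same
# core ("locked"), preserving I-cache affinity for that protocol.

def dispatch(protocol_id: int, k: int, n_cores: int) -> int:
    """Map a protocol to a core via K static buckets."""
    bucket = protocol_id % k       # which of the K buckets
    return bucket % n_cores        # which core owns that bucket

# Protocol 5 with K = 8 buckets on 4 cores always reaches core 1:
print(dispatch(5, 8, 4))  # → 1
```

With fewer buckets than cores the static mapping would leave some cores idle, which is one reason a lower bound on K matters in such schemes.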
“…In order to justify their argument, they first traced a production VDI workload and found that caching below the individual VMs is effective in improving I/O performance. Capo was integrated with XenServer [28] by placing it into domain 0.…”
Section: Related Studies
confidence: 99%
“…According to the proportional-share scheduling model developed in our previous research [5], the completion time of the parallel application is as follows: The completion time then increases with Lag. This means that when a VM receives less CPU time at any moment than it ideally should, its performance degrades further.…”
Section: Performance of Concurrent VMs
confidence: 99%
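The "Lag" notion in the quote above can be made concrete with a minimal sketch. The function and variable names are assumptions for illustration: lag is the gap between the CPU time a VM should ideally have received under its proportional-share weight and the time it actually got, and the quoted statement says the parallel application's completion time grows with this lag.

```python
# Minimal sketch (names are assumptions, not from the cited paper) of
# lag under proportional-share scheduling: the difference between a
# VM's ideal weighted share of elapsed CPU time and its actual service.

def lag(weight: float, total_weight: float,
        elapsed: float, received: float) -> float:
    """Lag of one VM: ideal proportional service minus actual service."""
    ideal = elapsed * weight / total_weight   # ideal CPU time share
    return ideal - received

# A VM entitled to half the CPU over 100 ms that actually ran 40 ms
# has a lag of 10 ms:
print(lag(1, 2, 100, 40))  # → 10.0
```

A positive lag means the VM is behind its ideal allocation; per the quoted model, the further behind it falls, the later the parallel application completes.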