2011
DOI: 10.1007/s10766-011-0189-y
TMT: A TLB Tag Management Framework for Virtualized Platforms

Abstract: Virtualization is a convenient way to efficiently utilize the numerous on-chip resources in modern physical platforms. However, it is important to ensure high performance for the workloads running on such virtualized platforms. One factor that reduces the performance of these virtualized workloads is the frequent flushing of hardware-managed Translation Lookaside Buffers (TLBs). To avoid these flushes and reduce the TLB miss rate, we propose the Tag Manager Table (TMT), a hardware architecture for generating…

Cited by 4 publications (7 citation statements)
References 19 publications
“…Wiggins et al [2003] propose a software method to maintain process IDs for TLB entries. Venkatasubramanian et al [2009] show that TLB management overhead is one of the most significant performance bottlenecks in multicore-based VMs and recommend separate tagged TLBs for individual virtual platforms.…”
Section: Related Work
confidence: 97%
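
The tagged-TLB idea in the statement above can be made concrete with a short sketch. The following C fragment is illustrative only and is not the TMT hardware proposed in the paper: the entry layout, the 16-bit ASID tag width, and the fill policy are assumptions. It shows how a per-entry address-space tag lets a context switch simply change the active tag instead of flushing every entry, so translations cached for other contexts survive and can be reused later.

/* Minimal sketch of a tagged TLB (illustrative only; not the TMT design
 * from the paper). Each entry carries an address-space tag, so a context
 * switch changes the active tag instead of flushing the whole TLB. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

typedef struct {
    bool     valid;
    uint16_t asid;   /* address-space / VM tag (assumed width) */
    uint64_t vpn;    /* virtual page number */
    uint64_t pfn;    /* physical frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint16_t current_asid;          /* tag of the running context */

/* On a context switch, only the active tag changes; stale entries from
 * other address spaces stay cached and are ignored by the tag check. */
static void context_switch(uint16_t new_asid) { current_asid = new_asid; }

/* Fully associative lookup: hit only if both VPN and tag match. */
static bool tlb_lookup(uint64_t vpn, uint64_t *pfn_out)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].asid == current_asid && tlb[i].vpn == vpn) {
            *pfn_out = tlb[i].pfn;
            return true;
        }
    }
    return false;                      /* TLB miss: walk the page tables */
}

/* Simple fill policy for the sketch: reuse the first invalid slot,
 * otherwise overwrite slot 0 (a real TLB would use LRU or random). */
static void tlb_fill(uint64_t vpn, uint64_t pfn)
{
    int victim = 0;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (!tlb[i].valid) { victim = i; break; }
    }
    tlb[victim] = (tlb_entry_t){ .valid = true, .asid = current_asid,
                                 .vpn = vpn, .pfn = pfn };
}

int main(void)
{
    uint64_t pfn;
    context_switch(1);
    tlb_fill(0x1000, 0x8000);
    context_switch(2);                                        /* no flush needed */
    printf("hit in ASID 2? %d\n", tlb_lookup(0x1000, &pfn));  /* 0: tag mismatch */
    context_switch(1);
    printf("hit in ASID 1? %d\n", tlb_lookup(0x1000, &pfn));  /* 1: entry survived */
    return 0;
}

In the sketch, switching from ASID 1 to ASID 2 makes the tag check reject the cached translation, and switching back to ASID 1 hits it again without any intervening flush.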
“…Venkatasubramanian et al note that workload consolidation through virtualization increases the number of distinct VA spaces and the context-switch frequency between them. Because a context switch requires flushing the TLB, virtualization can increase TLB flushes by 10× and the miss rates of the DTLB and ITLB by 5× and 70×, respectively.…”
Section: Techniques for Improving TLB Coverage and Performance
confidence: 99%
“…Similarly, virtual caches can be used to reduce TLB accesses. TLB leakage energy can be reduced by using reconfiguration and by using non‐volatile (ie, low‐leakage) memory to design the TLB. TLB miss rate can be lowered by increasing TLB reach (eg, by using superpages or variable page sizes), by using prefetching, software caching, TLB partitioning, and by reducing flushing overhead (Table ). Some techniques use tags (eg, ASID) to remove homonyms and/or reduce flushing overhead. Because stack, heap, and global‐static data, and private and shared data, show different characteristics, utilizing this semantic information and page-classification information (respectively) allows the design of effective management techniques (Table ). Because the ITLB miss rate is generally much lower than that of the DTLB, some researchers focus only on the DTLB.…”
Section: Background and Overview
confidence: 99%
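
One of the levers listed above, increasing TLB reach, is easy to quantify with the usual back-of-the-envelope formula reach = number of entries × page size. The small C example below applies it to base pages versus superpages; the 64-entry TLB and the 4 KB / 2 MB page sizes are illustrative assumptions, not values taken from the cited survey.

/* Back-of-the-envelope TLB reach, illustrating why superpages raise
 * TLB coverage. Entry count and page sizes are example values only. */
#include <stdio.h>

int main(void)
{
    const long entries    = 64;               /* assumed DTLB size */
    const long base_page  = 4 * 1024;         /* 4 KB pages        */
    const long super_page = 2 * 1024 * 1024;  /* 2 MB superpages   */

    printf("reach with 4 KB pages : %ld KB\n", entries * base_page  / 1024);
    printf("reach with 2 MB pages : %ld MB\n", entries * super_page / (1024 * 1024));
    return 0;
}

With the same number of entries, the superpage configuration covers 128 MB instead of 256 KB, which is why superpages are an effective way to cut capacity misses.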
“…Tickoo et al [12] and Venkatasubramanian et al [13] have investigated TLB tagging with domain-specific and process-specific tags. However, the limitation of these studies is that the metric used to evaluate the impact of TLB tagging is miss rate rather than a timing-based metric.…”
Section: Related Work
confidence: 99%
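
The limitation pointed out here, that miss rate by itself is not a timing metric, can be illustrated with a simple cost model: if the cycles lost to the TLB are approximated as misses-per-instruction × page-walk latency, two configurations with identical miss rates can still pay very different penalties (for example, a nested page-table walk under virtualization is much longer than a native walk). The miss rate, walk latencies, and the linear model in the sketch below are assumptions made only for illustration.

/* Sketch of why a miss-rate-only comparison can hide timing effects.
 * Two hypothetical configurations have the same TLB miss rate but
 * different miss penalties, so their CPI contributions differ.
 * All numbers are made up for illustration. */
#include <stdio.h>

static double tlb_cpi_penalty(double misses_per_instr, double walk_cycles)
{
    /* extra cycles per instruction spent servicing TLB misses */
    return misses_per_instr * walk_cycles;
}

int main(void)
{
    const double miss_rate = 0.002;   /* 2 misses per 1000 instructions */

    printf("native walk  (~30 cycles): %.4f extra CPI\n",
           tlb_cpi_penalty(miss_rate, 30.0));
    printf("nested walk (~120 cycles): %.4f extra CPI\n",
           tlb_cpi_penalty(miss_rate, 120.0));
    return 0;
}

Under these assumed numbers, the same 0.2% miss rate costs 0.06 extra CPI with the short walk but 0.24 with the long one, which is exactly the kind of difference a miss-rate-only comparison hides.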
“…Beyond 256 entries, the dominant cause of TLB misses is the repeated flushing of the TLB and not TLB size limitations. From earlier work [13], it is known that the ITLB and DTLB miss rates for TPCC-UVa decrease as the TLB size is scaled up to 256 entries and remain constant for larger sizes. It can be seen from Figure 4a that a similar trend is exhibited in the TLB R_IPC values.…”
Section: A. Effect of Virtualization on TLB's Impact on Workload Performance
confidence: 99%