2018
DOI: 10.1007/s10586-018-2828-1
OMBM: optimized memory bandwidth management for ensuring QoS and high server utilization

Cited by 4 publications (5 citation statements)
References 28 publications
“…Challenge 3-Reliability in critical applications: In order to support mission-critical applications in the IoV (e.g., first responder communication and transportation systems), services with low communication latency and high reliability are required [47][48][49]. These latency-sensitive applications have propagation lengths that range from short to medium.…”
Section: Challenges of IoV-assisted Smart Grid
confidence: 99%
“…With the widening compute-memory performance gap, allocating larger cache slices and higher memory bandwidth significantly enhances performance-bound QoS metrics (Sung et al, 2017). Contention for lower levels of cache among concurrent applications with diverse memory access patterns, compute-memory phases, intensities, and cache utilization is a major reason for application slowdown and QoS degradation (Tang et al, 2011; Subramanian et al, 2015).…”
Section: Memory
confidence: 99%
“…Cache partitioning to provide either larger or at least sufficient last-level cache slices is a common approach to meet the QoS requirements of latency-critical applications (Kasture and Sanchez, 2014; Iyer et al, 2007). Identifying application/thread priority and scaling cache allocation accordingly, following utilitarian principles, is another strategy to improve overall throughput metrics (Herdrich et al, 2016; Sharifi et al, 2011; Sung et al, 2017; Guo et al, 2007a). Optimizing for memory-controller proximity reduces shared-resource pressure further and provides predictable QoS guarantees (Beckmann et al, 2015; Subramanian et al, 2015).…”
Section: Memory
confidence: 99%
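The utilitarian, priority-scaled cache partitioning described in the statements above can be sketched as follows. This is a minimal illustration, not the method of any cited work: the function name, priority weights, and way counts are assumptions, and the contiguous-bitmask constraint mirrors Intel CAT-style hardware, which requires contiguous capacity bitmasks per class of service. It assumes at least one way per application.

```python
def partition_cache_ways(priorities, total_ways=12):
    """Split last-level-cache ways among apps in proportion to priority.

    Returns contiguous way bitmasks (as CAT-style hardware requires);
    every app receives at least one way. Assumes total_ways >= len(priorities).
    """
    total_prio = sum(priorities.values())
    # Initial proportional share, at least one way each.
    shares = {app: max(1, round(total_ways * p / total_prio))
              for app, p in priorities.items()}
    # Trim any rounding overshoot from the lowest-priority apps first.
    order = sorted(priorities, key=priorities.get)
    i = 0
    while sum(shares.values()) > total_ways:
        app = order[i % len(order)]
        if shares[app] > 1:
            shares[app] -= 1
        i += 1
    # Give leftover ways to the highest-priority app.
    top = max(priorities, key=priorities.get)
    shares[top] += total_ways - sum(shares.values())
    # Assign contiguous bitmasks, highest priority in the low ways.
    masks, start = {}, 0
    for app in sorted(priorities, key=priorities.get, reverse=True):
        n = shares[app]
        masks[app] = ((1 << n) - 1) << start
        start += n
    return masks
```

For example, with a latency-critical app at priority 3 and a best-effort app at priority 1 on an 8-way cache, the former receives 6 ways and the latter 2, as disjoint contiguous masks.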
“…All the provisioning techniques prioritize applications based on QoS requirements and adapt further by dynamically monitoring resource utilization after provisioning. Memory and Storage: With the widening compute-memory performance gap, allocating larger cache slices and higher memory bandwidth significantly enhances performance-bound QoS metrics [36][37][38]. Using cache partitioning to provide either larger or sufficient cache slices is a common approach to meet the QoS requirements of latency-critical applications [39][40].…”
Section: QoS
confidence: 99%
“…Using cache partitioning to provide either larger or sufficient cache slices is a common approach to meet the QoS requirements of latency-critical applications [39][40]. Identifying application/thread priority and scaling cache allocation accordingly, following utilitarian principles, is another strategy to improve overall throughput metrics [41][42][36]. Optimizing for memory-controller proximity [43][38] and allocating higher bandwidth can enhance the QoS of memory-intensive applications [44][45].…”
Section: QoS
confidence: 99%
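The bandwidth-allocation idea in the statements above is often realized as a feedback loop that throttles best-effort applications (e.g., via an MBA-style percentage) whenever the latency-critical application violates its SLO. The sketch below is a hypothetical controller under assumed names and thresholds; it is not the OMBM algorithm or any cited paper's mechanism:

```python
def adjust_throttle(current_throttle, lc_latency_us, slo_us,
                    step=10, floor=10, ceiling=100):
    """One step of a feedback loop over best-effort memory bandwidth.

    current_throttle is the best-effort bandwidth cap in percent
    (100 = unthrottled). Squeeze it when the latency-critical (lc)
    app violates its SLO; relax it when there is ample slack.
    """
    if lc_latency_us > slo_us:           # SLO violated: cut BE bandwidth
        return max(floor, current_throttle - step)
    if lc_latency_us < 0.8 * slo_us:     # ample slack: give bandwidth back
        return min(ceiling, current_throttle + step)
    return current_throttle              # near the SLO: hold steady
```

The hysteresis band (only relaxing below 80% of the SLO) is a common design choice to avoid oscillating between throttle levels; the 80% factor here is illustrative.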