40th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2007) 2007
DOI: 10.1109/micro.2007.21
Stall-Time Fair Memory Access Scheduling for Chip Multiprocessors

Abstract: DRAM memory is a major resource shared among cores in a chip multiprocessor (CMP) system. Memory requests from different threads can interfere with each other. Existing memory access scheduling techniques try to optimize the overall data throughput obtained from the DRAM and thus do not take into account inter-thread interference. Therefore, different threads running together on the same chip can experience extremely different memory system performance: one thread can experience a severe slowdown or starvation…
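The fairness problem the abstract describes can be illustrated with a minimal sketch of stall-time fair prioritization. This is a hypothetical simplification, not the paper's implementation: it estimates each thread's slowdown as the ratio of its memory stall time when sharing DRAM to its estimated stall time running alone, and prioritizes the most-slowed thread only when unfairness exceeds a threshold (the function names and the `alpha` parameter are illustrative).

```python
# Hedged sketch of stall-time fair prioritization: estimate each
# thread's slowdown as t_shared / t_alone; if the ratio of the largest
# to the smallest slowdown exceeds a threshold alpha, service the
# most-slowed thread first, otherwise fall back to a
# throughput-oriented policy (e.g. FR-FCFS).

def slowdown(t_shared, t_alone):
    """Slowdown of one thread: memory stall time when sharing the DRAM
    divided by its estimated stall time when running alone."""
    return t_shared / t_alone

def pick_thread(stall_times, alpha=1.1):
    """stall_times: {thread_id: (t_shared, t_alone)}.
    Returns the thread id to prioritize, or None to indicate the
    scheduler should use its baseline throughput-oriented policy."""
    slowdowns = {tid: slowdown(*st) for tid, st in stall_times.items()}
    unfairness = max(slowdowns.values()) / min(slowdowns.values())
    if unfairness > alpha:
        return max(slowdowns, key=slowdowns.get)
    return None

# Thread 1 is slowed 3x, thread 2 only 1.2x: prioritize thread 1.
print(pick_thread({1: (300, 100), 2: (120, 100)}))  # -> 1
```

When all threads are slowed roughly equally, `pick_thread` returns `None` and the controller is free to maximize throughput, which matches the intuition that fairness intervention is only needed when interference is lopsided.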

Cited by 289 publications (171 citation statements). References 21 publications.
“…QoS-aware memory controllers were proposed in various contexts including a packet memory environment [1] and multi-processor environments [2,3,4,5]. In [1], the proposed adaptive feedback mechanism dynamically adjusts allocated bandwidths to different classes based on latency violations.…”
Section: Related Work
confidence: 99%
“…In [1], the proposed adaptive feedback mechanism dynamically adjusts allocated bandwidths to different classes based on latency violations. In [2,3], a fair queueing method is employed to allocate bandwidth for different processor threads whereas in [4,5], priority scheduling is used to schedule threads based on their sensitivity to inter-thread interference, latency, or bandwidth.…”
Section: Related Work
confidence: 99%
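The fair-queueing approach mentioned in the citation statement above can be sketched briefly. This is a generic virtual-finish-time scheme under assumed names, not the cited papers' exact mechanism: each thread's next request is tagged with a virtual finish time proportional to its service time over its weight, and the request with the earliest virtual finish time is scheduled, so DRAM bandwidth is shared in proportion to thread weights.

```python
# Hedged sketch of weighted fair queueing over per-thread request
# queues (illustrative names). Each thread accumulates a virtual
# finish time; the thread with the earliest one is served next.

def schedule(requests, virtual_time, weights):
    """requests: {thread_id: service time of the head request}.
    virtual_time: {thread_id: that thread's last virtual finish time},
    updated in place. weights: {thread_id: bandwidth share weight}.
    Returns the thread id scheduled next."""
    def finish(tid):
        return virtual_time[tid] + requests[tid] / weights[tid]
    tid = min(requests, key=finish)
    virtual_time[tid] = finish(tid)
    return tid

# Equal weights: the thread with the shorter head request (thread 2)
# finishes first in virtual time and is scheduled.
vt = {1: 0.0, 2: 0.0}
print(schedule({1: 10, 2: 5}, vt, {1: 1.0, 2: 1.0}))  # -> 2
```

Because a thread's virtual time advances by `service / weight`, a heavily serviced thread falls behind in priority, which is how this family of schedulers bounds per-thread bandwidth rather than per-thread slowdown.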