2019
DOI: 10.1007/s10766-019-00637-y

Adaptive Thread Scheduling in Chip Multiprocessors

Abstract: The full potential of chip multiprocessors remains unexploited due to the architecture-oblivious thread schedulers employed in operating systems. We introduce an adaptive cache-hierarchy-aware scheduler that tries to schedule threads in a way that minimizes inter-thread contention. A novel multi-metric scoring scheme characterizes the L1 cache access behavior of each thread, and scheduling decisions are made based on these multi-metric scores.
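As an illustration of the idea, the sketch below shows one way a multi-metric score could be computed from per-thread L1 statistics and used to co-schedule threads. The metric names, weights, and pairing heuristic are assumptions made for this example; they are not the scoring scheme defined in the paper.

from dataclasses import dataclass

@dataclass
class ThreadStats:
    tid: int
    l1_accesses: int   # hypothetical per-interval hardware counters
    l1_misses: int
    l1_evictions: int

def score(t: ThreadStats) -> float:
    # Hypothetical weighted combination of L1 metrics into one contention score.
    return 0.5 * t.l1_accesses + 0.3 * t.l1_misses + 0.2 * t.l1_evictions

def pair_threads(threads: list[ThreadStats]) -> list[tuple[int, int]]:
    # Co-schedule the lightest and heaviest remaining threads on cores that
    # share a cache, so that cache-intensive threads are not placed together.
    ordered = sorted(threads, key=score)
    pairs = []
    while len(ordered) >= 2:
        light = ordered.pop(0)
        heavy = ordered.pop(-1)
        pairs.append((light.tid, heavy.tid))
    return pairs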

Cited by 5 publications (11 citation statements)
References 27 publications
“…Akturk and Ozturk [6] also proposed similar solutions: they calculate inter-thread contention using scores that are not available in current processors. Another drawback of their mechanism is that it considers only inter-thread cache contention, which is suitable mainly for memory-bound applications.…”
Section: Discussion (mentioning)
confidence: 99%
“…Akturk and Ozturk [6] propose a cache-hierarchy-aware scheduler for multiple sequential applications, which balances the number of accesses to the L1 cache, reducing the number of evictions in shared caches that would eventually limit performance. Akturk and Ozturk's work is very similar to that of Settle et al. [11]; the main differences are that Akturk considers only the L1 cache, while Settle correlated different cache levels and metrics with the IPC of several applications.…”
Section: Related Work (mentioning)
confidence: 99%
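A minimal sketch of the balancing idea described in the excerpt above, assuming per-thread L1 access counts are available and cores are grouped by the shared cache they sit under. The greedy heaviest-first placement is an assumption for illustration, not the assignment policy from the paper.

def balance_by_l1_accesses(access_counts: dict[int, int], num_groups: int) -> dict[int, list[int]]:
    # Assign threads (tid -> L1 access count) to cache-sharing core groups so
    # that per-group totals stay roughly equal, spreading shared-cache pressure.
    groups = {g: [] for g in range(num_groups)}
    load = {g: 0 for g in range(num_groups)}
    # Place the heaviest threads first (longest-processing-time heuristic).
    for tid, count in sorted(access_counts.items(), key=lambda kv: -kv[1]):
        g = min(load, key=load.get)
        groups[g].append(tid)
        load[g] += count
    return groups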
“…Unlike them, we also consider execution units such as branch, integer, and floating point. Akturk et al. [Akturk and Ozturk 2019] propose a scheduler that maps threads in a way that minimizes the number of L1 cache accesses and reduces the number of evictions in shared caches, which would eventually limit performance. They improve the performance of the PARSEC benchmark [Bienia 2011] by up to 12.6%.…”
Section: Related Work (unclassified)
“…Previous research has identified contention in main memory and in the caches of SMT processors as one of the main performance bottlenecks [Akturk and Ozturk 2019, Choi and Yeung 2009, Cruz et al 2018, Feliu et al 2016, Serpa et al 2019a, Serpa et al 2019b]. Thus, using multiple sequential applications, the authors propose techniques to mitigate these effects, improving performance.…”
Section: Introduction (unclassified)