2013
DOI: 10.1145/2508148.2485936

Utility-based acceleration of multithreaded applications on asymmetric CMPs

Abstract: Asymmetric Chip Multiprocessors (ACMPs) are becoming a reality. ACMPs can speed up parallel applications if they can identify and accelerate code segments that are critical for performance. Proposals already exist for using coarse-grained thread scheduling and fine-grained bottleneck acceleration. Unfortunately, there have been no proposals offered thus far to decide which code segments to accelerate in cases where both coarse-grained thread scheduling and fine-grained bottleneck acceleration could have value. …
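The abstract frames UBA as choosing, at run time, which code segment is most useful to run on a large core when both lagging threads (coarse grain) and bottlenecks (fine grain) compete for it. The C++ sketch below only illustrates that style of utility-driven selection; the Candidate fields, the utility() model, and the example candidates are assumptions for illustration, not the paper's actual identification hardware or utility formulas.

// Hypothetical sketch of a utility-based selection loop: each candidate
// (a lagging thread or a code bottleneck) carries an estimated utility,
// i.e. the expected reduction in execution time if it runs on the large
// core next. The fields and the utility model are illustrative only.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Candidate {
    std::string name;        // e.g. "thread 3" or "lock-guarded critical section"
    double estimatedSpeedup; // assumed big-core vs small-core speedup for this segment
    double criticality;      // assumed fraction of remaining execution it gates
};

// Illustrative utility: how much accelerating this candidate shortens the
// critical path (not the paper's formula).
double utility(const Candidate& c) {
    return c.criticality * (1.0 - 1.0 / c.estimatedSpeedup);
}

const Candidate* pickForLargeCore(const std::vector<Candidate>& candidates) {
    if (candidates.empty()) return nullptr;
    return &*std::max_element(candidates.begin(), candidates.end(),
        [](const Candidate& a, const Candidate& b) {
            return utility(a) < utility(b);
        });
}

int main() {
    std::vector<Candidate> candidates = {
        {"lagging thread 3", 1.8, 0.40},
        {"barrier-gating critical section", 2.2, 0.25},
    };
    if (const Candidate* best = pickForLargeCore(candidates))
        std::cout << "accelerate: " << best->name << '\n';
}

As the citation statements below point out, UBA itself relies on dedicated support for lagging thread identification, bottleneck identification, and acceleration coordination, none of which this software-only sketch models.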

Cited by 28 publications (45 citation statements)
References 23 publications
“…In other words, it alternately accelerates threads on fast cores until all threads hit the barrier. Likewise, UBA [53] traces the bottlenecks of slow threads, including locks, barriers, pipeline stages, and serial sections. It alternately accelerates the bottlenecks of different applications on fast cores.…”
Section: Performance Comparison (mentioning)
confidence: 99%
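The statement above says UBA traces the bottlenecks of slow threads (locks, barriers, pipeline stages, serial sections) and alternately accelerates bottlenecks from different applications on the fast cores. The sketch below is a minimal round-robin picture of that alternation; the App structure, the bottleneck list, and the fixed four-quantum loop are illustrative assumptions, not UBA's actual mechanism.

// Illustrative sketch of "alternately accelerating" bottlenecks from different
// applications on a fast core: each quantum, the next application in round-robin
// order nominates its currently most pressing bottleneck (lock, barrier,
// pipeline stage, or serial section). The bookkeeping is hypothetical.
#include <deque>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

enum class BottleneckKind { Lock, Barrier, PipelineStage, SerialSection };

struct App {
    std::string name;
    std::vector<std::pair<BottleneckKind, std::string>> activeBottlenecks;
};

int main() {
    std::deque<App> apps = {
        {"appA", {{BottleneckKind::Lock, "L0"}}},
        {"appB", {{BottleneckKind::Barrier, "B1"}}},
    };
    for (int quantum = 0; quantum < 4 && !apps.empty(); ++quantum) {
        App app = apps.front();
        apps.pop_front();
        if (!app.activeBottlenecks.empty())
            std::cout << "quantum " << quantum << ": fast core runs "
                      << app.name << " bottleneck "
                      << app.activeBottlenecks.front().second << '\n';
        apps.push_back(app);  // round-robin: other applications get the next turn
    }
}

Round-robin is used here only to make the alternation concrete; UBA's own policy, as the title indicates, is utility-based.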
“…Prior scheduling methods focus on how to use the fast core to boost the performance of multi-threaded applications [52], [53]. CAMP (a comprehensive scheduler for asymmetric multi-core processors) adopts a BusyFCs strategy that keeps the fast cores busy.…”
(mentioning)
confidence: 99%
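The quoted BusyFCs idea amounts to never leaving a fast core idle while runnable threads sit on slow cores. Below is a minimal sketch of such a keep-the-fast-cores-busy policy, assuming a single queue of threads currently mapped to slow cores; it is an interpretation for illustration, not CAMP's actual scheduler.

// A minimal sketch of a "keep the fast cores busy" policy: whenever a fast core
// goes idle, pull a runnable thread off a slow core so the fast core never sits
// empty. Simplified illustration only.
#include <iostream>
#include <optional>
#include <queue>

struct Thread { int id; };

struct Scheduler {
    std::queue<Thread> readyOnSlowCores;  // threads currently mapped to slow cores

    // Called when a fast core finishes its current thread or quantum.
    std::optional<Thread> onFastCoreIdle() {
        if (readyOnSlowCores.empty()) return std::nullopt;  // nothing to migrate
        Thread t = readyOnSlowCores.front();
        readyOnSlowCores.pop();
        return t;  // migrate this thread to the idle fast core
    }
};

int main() {
    Scheduler s;
    s.readyOnSlowCores.push({7});
    if (auto t = s.onFastCoreIdle())
        std::cout << "migrate thread " << t->id << " to the fast core\n";
}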
“…Although fairness-aware scheduling strives to achieve fairness by running each logical core thread on each physical core type for an equal amount of time, KUTHS scheduling instead tries to enhance these scheduling benefits by discovering and running the critical code sections on the larger cores. The difference between KUTHS and the state-of-the-art bottleneck acceleration techniques found in bottleneck identification and scheduling (BIS) [10] and utility-based acceleration (UBA) [11] is that KUTHS does not require any ISA extensions that impact code reusability.…”
Section: Related Work in Scheduling Techniques (mentioning)
confidence: 99%
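For contrast with KUTHS, the fairness-aware policy mentioned above gives each thread an equal amount of time on each physical core type. The toy sketch below shows the simplest version of that rotation, assuming one big core, one little core, and two threads; the setup is hypothetical and only meant to make the equal-time idea concrete.

// Toy model of fairness-aware equal-time rotation across core types: swap the
// two threads between the big core and the little core every quantum so each
// thread accumulates the same time on each core type. Assumed setup only.
#include <iostream>
#include <utility>

int main() {
    int onBig = 0, onLittle = 1;  // thread ids currently on each core
    for (int quantum = 0; quantum < 4; ++quantum) {
        std::cout << "quantum " << quantum << ": thread " << onBig
                  << " on big core, thread " << onLittle << " on little core\n";
        std::swap(onBig, onLittle);  // rotate so time on each core type evens out
    }
}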
“…KUTHS has hardware additions, including a transition bit on every core and a vector that holds all of the transition bits. However, in contrast to the low additional overhead needed by KUTHS, UBA requires lagging thread identification, bottleneck identification, and acceleration coordination [11], whereas BIS requires a bottleneck table whose entries each correspond to a bottleneck, an acceleration index table augmenting each small core, and a scheduling buffer added to each large core [10].

KUTHS algorithm extension for many-core systems

Now that we've explained the essence of the KUTHS algorithm, it is important to note the results in Table 1 before applying KUTHS to a many-core processor.…”
Section: Related Work in Scheduling Techniques (mentioning)
confidence: 99%
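The statement above credits KUTHS with modest hardware: a transition bit on every core plus a vector collecting all of those bits, versus UBA's identification and coordination machinery and BIS's per-core tables and buffers. The sketch below models only that bit vector; how the bits are set and drained (a core requests a transition, the scheduler scans and clears) is an assumption made for illustration.

// A minimal model of the per-core transition bits attributed to KUTHS: one bit
// per core, collected in a single vector the scheduler can scan. The set/clear
// protocol shown here is assumed, not the published design.
#include <bitset>
#include <iostream>

constexpr std::size_t kNumCores = 8;

struct TransitionBits {
    std::bitset<kNumCores> bits;  // one transition bit per core

    void request(std::size_t core) { bits.set(core); }  // core asks for a transition
    bool pending() const { return bits.any(); }

    // Scheduler side: find one requesting core and clear its bit.
    int takeNext() {
        for (std::size_t c = 0; c < kNumCores; ++c)
            if (bits.test(c)) { bits.reset(c); return static_cast<int>(c); }
        return -1;
    }
};

int main() {
    TransitionBits tb;
    tb.request(2);  // e.g. core 2 reaches a critical code section
    while (tb.pending())
        std::cout << "move work from core " << tb.takeNext() << " to a large core\n";
}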