Proceedings of the Fifteenth International Conference on Architectural Support for Programming Languages and Operating Systems 2010
DOI: 10.1145/1736020.1736035
Decoupling contention management from scheduling

Abstract: Many parallel applications exhibit unpredictable communication between threads, leading to contention for shared objects. The choice of contention management strategy strongly impacts the performance and scalability of these applications: spinning provides maximum performance but wastes significant processor resources, while blocking-based approaches conserve processor resources but introduce high overheads on the critical path of computation. Under situations of high or changing load, the operating system com…
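The trade-off the abstract describes can be made concrete with a minimal sketch: a spin lock keeps the waiter on the processor, polling until the lock frees. The class and names below are illustrative, not taken from the paper.

```python
import threading

class SpinLock:
    """Illustrative spin lock: waiters burn CPU instead of blocking."""
    def __init__(self):
        self._held = False
        self._guard = threading.Lock()  # protects the test-and-set

    def acquire(self):
        while True:
            with self._guard:
                if not self._held:
                    self._held = True
                    return
            # Busy-wait: lowest hand-off latency, but wasted cycles
            # whenever there are more runnable threads than cores.

    def release(self):
        with self._guard:
            self._held = False

counter = 0
lock = SpinLock()

def worker(iters):
    global counter
    for _ in range(iters):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

A blocking lock (e.g. `threading.Lock` used directly) would instead park each waiter in the scheduler, conserving cycles at the cost of wake-up overhead on the critical path.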

Cited by 30 publications (34 citation statements)
References 21 publications
“…To have bounded contention and reduce the overheads of creating and retiring threads, it is recommended to maintain a thread pool catering to the functions rather than having one thread per user/connection. The design of the thread pool and the allocation of a thread to incoming requests require careful analysis, as the business tier deals with stateful data and the "ordering" of transactions would in some cases have to be maintained [13] [14].…”
Section: Enhancing Concurrency of the Critical Path Activities
confidence: 99%
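The bounded-pool recommendation in this statement can be sketched with Python's standard thread pool; `handle_request` is a hypothetical handler standing in for business-tier work, and `max_workers` bounds contention the way the quoted text suggests.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # Stand-in for business-tier work on a request.
    return req_id * 2

# A bounded pool serves all requests, instead of one thread per connection.
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() returns results in submission order, which helps where
    # transaction "ordering" must be maintained.
    results = list(pool.map(handle_request, range(8)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Note that `Executor.map` preserves input order in its results even though the handlers run concurrently; out-of-order completion only matters if handlers have side effects that must themselves be ordered.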
“…The observed degradation in these workloads is a result of multiplexing all threads on a single core. The resulting oversubscribed system is prone to known pathologies from contention on synchronization, load imbalance, convoying, and frequent context switches [9,25,28,29,50]. Demonstrating the penalty of oversubscription.…”
Section: Mitigating Overheads of Truncated Sprints
confidence: 99%
“…In the implementation of synchronization primitives, blocking is sometimes used to give up the processor when waiting for the primitive takes too long. While blocking is definitely a part of lock contention, it is also arguably a part of scheduling, as effectively argued in the work by Johnson [8]. Essentially, the action of giving up the processor to make way for other threads is a scheduling activity, and it may be more convenient, for performance debugging purposes, to treat it as such.…”
Section: A Measuring Software-Induced Overhead
confidence: 99%
“…If, for example, the number of threads exceeds the number of available cores, the performance of a spin barrier can degrade drastically. A dynamic locking primitive that switches between a spinning and a blocking implementation depending on the number of runnable threads has been proposed by Johnson [8] and could be used in this case.…”
Section: Inefficient Barrier Implementation
confidence: 99%
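A hedged sketch of the kind of load-adaptive primitive this statement attributes to Johnson [8]: spin briefly when the machine is not oversubscribed, otherwise fall back to blocking and cede the core to the scheduler. The class, the spin budget, and the oversubscription test are illustrative assumptions, not the paper's implementation.

```python
import os
import threading

SPIN_LIMIT = 1000  # illustrative spin budget before falling back to blocking

class SpinThenBlockLock:
    """Spins while cores are free; blocks when the system is oversubscribed."""
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        # Crude oversubscription check: more live threads than cores.
        if threading.active_count() <= (os.cpu_count() or 1):
            for _ in range(SPIN_LIMIT):
                if self._lock.acquire(blocking=False):
                    return
        # Oversubscribed, or spin budget exhausted: block instead,
        # handing the core back to the scheduler.
        self._lock.acquire()

    def release(self):
        self._lock.release()

total = 0
lk = SpinThenBlockLock()

def bump():
    global total
    for _ in range(500):
        lk.acquire()
        total += 1
        lk.release()

ts = [threading.Thread(target=bump) for _ in range(8)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(total)  # 4000
```

Either path ends in `self._lock.acquire()`, so mutual exclusion holds regardless of which strategy a given waiter takes; only the waiting cost changes with load.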