Parallel Programming in OpenMP (2001)
DOI: 10.1016/b978-155860671-5/50007-4
Cited by 35 publications (40 citation statements)
References 0 publications
“…f10 means that the application enters a critical section approximately every million instructions. As the results show, when the critical-section frequency is higher, the execution-time differences between half-half and all-fast shrink. Consequently, half-half in both the 75%-f10000 and 75%-f1000 cases consumes less energy than all-fast.…”
Section: Critical Section Frequency Effects
confidence: 50%
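The trade-off quoted above is easiest to see in code. Below is a minimal sketch, not taken from the cited paper, of an OpenMP microbenchmark in which each thread enters a critical section once every WORK_PER_CS loop iterations; WORK_PER_CS and the trivial loop body are illustrative stand-ins for the paper's f parameter.

    #include <omp.h>
    #include <stdio.h>

    #define WORK_PER_CS 1000000  /* illustrative: one critical section per million iterations */

    int main(void) {
        long shared_counter = 0;
        double t0 = omp_get_wtime();

        #pragma omp parallel
        {
            long local = 0;                   /* private per-thread work */
            for (long i = 0; i < 10L * WORK_PER_CS; i++) {
                local++;
                if (local == WORK_PER_CS) {   /* periodically enter the serialized section */
                    #pragma omp critical
                    shared_counter += local;
                    local = 0;
                }
            }
            #pragma omp critical              /* flush any remainder */
            shared_counter += local;
        }

        printf("counter=%ld  elapsed=%.3fs\n", shared_counter, omp_get_wtime() - t0);
        return 0;
    }

Raising WORK_PER_CS lowers the critical-section frequency, analogous to the f parameter varied in the quoted experiment.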
“…In dynamic scheduling, each thread is assigned some number of iterations (the chunk parameter sets this number) at the start of the loop. After that, each thread requests more iterations once it has completed the work already assigned to it [5]. Guided scheduling is similar to dynamic scheduling, except that dynamic scheduling uses a constant chunk size while guided scheduling shrinks the chunk size at runtime as the remaining iterations run out.…”
Section: Dynamic Scheduling Effects in OpenMP
confidence: 99%
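The contrast between the two clauses can be written down directly. This is a minimal sketch in which N, CHUNK, and the loop bodies are illustrative assumptions, not values from the cited paper.

    #include <omp.h>
    #include <math.h>
    #include <stdio.h>

    #define N 1000000
    #define CHUNK 64

    int main(void) {
        static double a[N];

        /* dynamic: every chunk handed out has exactly CHUNK iterations */
        #pragma omp parallel for schedule(dynamic, CHUNK)
        for (int i = 0; i < N; i++)
            a[i] = sin((double)i);

        /* guided: chunks start large and shrink as the remaining work
           runs out, never dropping below CHUNK iterations */
        #pragma omp parallel for schedule(guided, CHUNK)
        for (int i = 0; i < N; i++)
            a[i] += cos((double)i);

        printf("a[42] = %f\n", a[42]);
        return 0;
    }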
“…(27–30) In physically shared memory, where data distribution is less of a problem, programmers can specify parallel regions so that the compiler can better find concurrency, as in OpenMP (32). Still, implicit and explicit parallel programming styles have their own strengths and weaknesses, and hence their own applicable areas. In particular, implicit parallel programming is easy for programmers to use and exploits instruction-level parallelism, whereas explicit parallel programming requires more input from the programmer and is better suited to higher-level task-parallel problems.…”
Section: Data Sharing
confidence: 99%
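To make "specifying a parallel region" concrete, here is a minimal OpenMP sketch; the array, loop, and variable names are illustrative, not drawn from the cited paper.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000

    int main(void) {
        double a[N];
        double total = 0.0;

        /* The programmer marks the region; the compiler and runtime find the
           concurrency. 'a' is shared, 'tmp' and 'i' are private per thread,
           and reduction(+:total) safely combines per-thread partial sums. */
        #pragma omp parallel for reduction(+:total)
        for (int i = 0; i < N; i++) {
            double tmp = 0.5 * i;
            a[i] = tmp;
            total += tmp;
        }

        printf("total = %f\n", total);
        return 0;
    }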
“…

    #include <stdio.h>    /* headers required by printf and atoi */
    #include <stdlib.h>

    tfun int fib(int n) {
        if (n < 2) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    tfun int main(int argc, char *argv[]) {
        int n = atoi(argv[1]);
        printf("Fibonacci %d is %d\n", n, (int)fib(n));
        return 0;
    }

The cast (int)fib(n) is necessary to make the main thread wait for the other threads to complete. The OpenTS runtime support library relies on MPI for communication in a cluster environment, while additional options are available (PVM, and TCP/IP when MPI is not applicable).…”
Section: OpenTS: T-System Implementation
confidence: 99%
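For comparison with the book's programming model, the same recursion is commonly expressed with OpenMP tasks (a feature added in OpenMP 3.0, well after this 2001 book). The sketch below is an illustrative analogue, not code from the cited paper; #pragma omp taskwait plays the synchronizing role that the (int) cast plays in OpenTS.

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int fib(int n) {
        if (n < 2) return 1;
        int x, y;
        #pragma omp task shared(x)
        x = fib(n - 1);
        #pragma omp task shared(y)
        y = fib(n - 2);
        #pragma omp taskwait    /* wait for both child tasks, like the (int) cast */
        return x + y;
    }

    int main(int argc, char *argv[]) {
        if (argc < 2) return 1;
        int n = atoi(argv[1]);
        int result;
        #pragma omp parallel
        #pragma omp single      /* one thread spawns the root call; the team runs the tasks */
        result = fib(n);
        printf("Fibonacci %d is %d\n", n, result);
        return 0;
    }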
“…OpenMP [1] may be considered the most widely used implementation of the first approach: by default, the number of threads created in a parallel section equals the number of CPUs available. However, OpenMP is mostly applied to loop parallelization, where the loop iterations require approximately equal numbers of CPU instructions.…”
Section: Related Work
confidence: 99%
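The default behavior described above can be observed with a few lines of OpenMP; this sketch assumes no explicit thread-count settings (e.g. no OMP_NUM_THREADS in the environment).

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        printf("available processors: %d\n", omp_get_num_procs());

        #pragma omp parallel
        {
            #pragma omp single   /* one thread reports the team size */
            printf("threads in this parallel section: %d\n", omp_get_num_threads());
        }
        return 0;
    }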