Handbook of Computational Statistics 2011
DOI: 10.1007/978-3-642-21551-3_9
Parallel Computing Techniques

Cited by 13 publications (5 citation statements). References 24 publications.
“…Temple Lang (1997) described a multithreaded application in S. PVM and MPI are directly available from R via the rpvm, Rmpi, and npRmpi packages. They can all be used in combination with the "snow" package, which facilitates implementation of embarrassingly parallel computations in R (Nakano 2004). There are now more than 20 packages for parallel statistical computing in R. Parallel statistical methods are also available in Matlab, through its Parallel Toolbox.…”
Section: Parallel Statistical Software (mentioning)
confidence: 99%
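The statement above describes embarrassingly parallel computation as offered by R's "snow" package (e.g. `parLapply`): independent tasks are mapped across workers with no communication between them. A minimal sketch of that same pattern, using Python's `multiprocessing.Pool` as an analogue since the chapter's examples are in R (the `simulate` task here is hypothetical):

```python
from multiprocessing import Pool
import random

def simulate(seed):
    """One independent task, e.g. a single Monte Carlo replication."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000)) / 1000

def run_parallel(n_tasks, n_workers=4):
    # map() distributes the independent tasks across the workers with
    # no inter-task communication -- the embarrassingly parallel pattern
    # that snow's parLapply/clusterApply expresses in R.
    with Pool(processes=n_workers) as pool:
        return pool.map(simulate, range(n_tasks))

if __name__ == "__main__":
    print(run_parallel(8))
```

Because each task is seeded independently, the result is identical to a serial `[simulate(s) for s in range(8)]`, which is what makes such workloads trivial to distribute.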
“…As shown above, this is not the case because in the implementation of a parallel algorithm there are some inherent non-parallelizable parts and communication costs between tasks (Nakano 2012). Amdahl's Law (Amdahl 1967) is often used in parallel computing to predict the theoretical maximum speedup when using multiple processors.…”
Section: Adjustments For Speeding Up the Algorithm (mentioning)
confidence: 99%
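Amdahl's Law, as invoked in the statement above, bounds the speedup by the serial fraction of the work: if a fraction f of the computation can be parallelized, the theoretical speedup on p processors is S(p) = 1 / ((1 - f) + f/p), which can never exceed 1/(1 - f) however many processors are added. A small sketch of that formula:

```python
def amdahl_speedup(f, p):
    """Theoretical maximum speedup under Amdahl's Law, where f is the
    fraction of the work that can be parallelized and p the number of
    processors; the serial fraction (1 - f) bounds the speedup."""
    return 1.0 / ((1.0 - f) + f / p)

# With 95% of the work parallelizable, 16 processors give well under a
# 16x speedup, and no number of processors can beat 1/(1 - f) = 20x.
print(round(amdahl_speedup(0.95, 16), 2))  # → 9.14
```

This is why the citing papers observe that doubling the processor count does not double the observed speed: the non-parallelizable parts dominate as p grows.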
“…By using a machine with P cores/processors, we would like to obtain an increase in calculation speed of P times. However, this is typically not the case because in the implementation of a parallel algorithm there are some inherent non-parallelisable parts and communication costs between tasks (Nakano, 2012). The speedup achieved using P processors is computed as S_P = T_1 / T_P…” [Figure residue removed: speedup plot over 1, 2, 4, 8, 16 cores.]
Section: Simulation Study (mentioning)
confidence: 99%
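The empirical speedup in the statement above is the serial run time divided by the parallel run time, S_P = T_1 / T_P, often reported alongside the parallel efficiency S_P / P. A minimal sketch with hypothetical timings (the numbers below are illustrative only, loosely echoing the cited study's core counts of 1, 2, 4, 8, 16):

```python
def speedup(t1, tp):
    """Empirical speedup S_P = T_1 / T_P: time on one processor
    divided by time on P processors."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Parallel efficiency S_P / P; 1.0 would mean ideal linear scaling."""
    return speedup(t1, tp) / p

# Hypothetical timings in seconds -- not measurements from the study.
t1 = 8000.0
for p, tp in [(2, 4400.0), (4, 2500.0), (8, 1600.0), (16, 1200.0)]:
    print(p, round(speedup(t1, tp), 2), round(efficiency(t1, tp, p), 2))
```

Efficiency typically falls as P grows, which is exactly the Amdahl-style behaviour (serial fractions and communication costs) the citing papers describe.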