1997
DOI: 10.1002/(sici)1096-9128(199710)9:10<915::aid-cpe277>3.0.co;2-c
Message-passing performance of various computers

Abstract: This report compares the performance of different computer systems for basic message passing. Latency and bandwidth are measured on Convex, Cray, IBM, Intel, KSR, Meiko, nCUBE, NEC, SGI and TMC multiprocessors. Communication performance is contrasted with the computational power of each system. The comparison includes both shared and distributed memory computers as well as networked workstation clusters. © 1997 John Wiley & Sons, Ltd.
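Latency and bandwidth figures of the kind the paper reports are conventionally obtained with a ping-pong test: one process sends a message, a partner process echoes it back, and the round trip is timed over many repetitions. The C/MPI sketch below illustrates that technique only; it is not the paper's benchmark code, and the message size, repetition count, and two-rank setup are assumptions. In practice, latency is taken from very small messages and bandwidth from large ones.

/* Ping-pong latency/bandwidth sketch (illustrative; not the paper's code).
 * Build: mpicc pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int len  = 1 << 20;   /* 1 MiB payload: an assumed, illustrative size */
    const int reps = 1000;      /* assumed repetition count */
    int rank, size;
    char *buf = malloc(len);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {        /* rank 0: send, then wait for the echo */
            MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) { /* rank 1: echo the message back */
            MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double rtt = (MPI_Wtime() - t0) / reps;   /* average round-trip time */

    if (rank == 0) {
        /* one-way time is approximated as half the round trip;
         * each round trip moves 2 * len bytes */
        printf("one-way time ~ %g us\n", rtt / 2 * 1e6);
        printf("bandwidth    ~ %g MB/s\n", 2.0 * len / rtt / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

Sweeping len from a few bytes up to megabytes yields the familiar latency-dominated and bandwidth-dominated regimes that benchmarks of this era contrasted across machines.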

Cited by 35 publications (9 citation statements)
References 7 publications
“…However, for applications that would also like to utilize the cluster resources to achieve greater scalability, explicit message passing is used. Although applying a SAS model to cluster computing is feasible, to achieve the best computational performance and scalability results, a message passing model is preferred (Shan et al, 2003;Dongarra & Dunigan, 1997). Scopira includes support for two well established message passing interfaces, MPI and PVM, as well as a custom, embedded, object-oriented message passing interface designed for ease of use and deployment.…”
Section: Parallel Processing (citation type: mentioning; confidence: 99%)
“…As noted in [7], most of the existing methods consider minimization of the total message volume. Depending on the machine architecture and problem characteristics, communication overhead due to message latency may be a bottleneck as well [5]. Furthermore, the maximum message volume and latency handled by a single processor may also have crucial impact on the parallel performance [10,11].…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…As noted in [10], most of the existing models consider minimizing the total communication volume. Depending on the machine architecture and the problem characteristics, communication overhead due to message latency may be a bottleneck as well [8]. Furthermore, maximum communication volume and latency handled by a single processor may also have crucial impacts on the parallel performance.…”
Section: Introduction (citation type: mentioning; confidence: 99%)