2009
DOI: 10.1016/j.cpc.2009.05.002

Parallel programming interface for distributed data

Cited by 7 publications (6 citation statements) · References 11 publications
“…At the start of a job, a specified number of identical instances of the executable are started, and in many parts of the program the work is distributed between the processors, with appropriate communication to share data and consolidate results. The support for these communication functions is provided by a software layer, the parallel programming interface for distributed data (PPIDD),102,103 which can itself be built on top of the Global Arrays (GA) toolkit or any standard MPI library. PPIDD supports global data structures that can be accessed by any process without message exchange or other explicit reference to other processes.…”
Section: Technical Features
confidence: 99%
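The global data structures described in this excerpt follow the model of the underlying Global Arrays toolkit: any process may read or update a patch of a distributed array without the owning process posting a matching receive. The sketch below is a minimal illustration using the GA C interface directly rather than PPIDD's own bindings (which are not shown in this excerpt); the array name, dimensions, and memory-allocator sizes are illustrative assumptions, not values from the paper.

```c
#include <mpi.h>
#include "ga.h"
#include "macdecls.h"

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    GA_Initialize();                        /* GA sits on top of MPI/ARMCI      */
    MA_init(C_DBL, 1000000, 1000000);       /* local memory allocator for GA    */

    int dims[2]  = {1000, 1000};            /* global extents (illustrative)    */
    int chunk[2] = {-1, -1};                /* let GA choose the distribution   */
    int g_a = NGA_Create(C_DBL, 2, dims, "work", chunk);
    GA_Zero(g_a);

    /* One-sided read of an arbitrary patch: the process that owns this block
       of the global array takes no explicit part in the transfer.             */
    double buf[10 * 10];
    int lo[2] = {0, 0}, hi[2] = {9, 9}, ld[1] = {10};
    NGA_Get(g_a, lo, hi, buf, ld);

    GA_Sync();
    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}
```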
“…The amplitudes seem to be replicated to simplify parallelization. The distributed data abstraction, implemented on top of the GA toolkit and used to implement the parallel CC methods in Molpro, was described by Wang et al. 287 …”
Section: CCSD
confidence: 99%
“…Accessing the remote node's memory is handled without the remote process's explicit involvement, hence this is also referred to as one-sided communication. A variety of libraries provide support for this paradigm, such as GA/ARMCI, 42 DDI, 43,44 Linda, 45 PPIDD, 46 and MPI-2. 47 It should be noted that MPI-2, although it extends MPI with, among other things, one-sided communication primitives, leaves it up to the programmer to manage the exact data distribution and every single data transmission.…”
Section: Internode Communication
confidence: 99%
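To make the contrast drawn in this excerpt concrete, the sketch below uses standard MPI-2 one-sided primitives: each rank exposes local memory through a window, and one rank fetches a remote block with MPI_Get inside a fence epoch, without the target posting a receive. Unlike GA or PPIDD, the programmer here decides how the data are laid out across ranks and issues every transfer explicitly. The block sizes and the choice of ranks are illustrative assumptions.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    /* Each rank exposes a block of 100 doubles; how these blocks form a
       "global" data set is entirely the programmer's own bookkeeping.     */
    const int n = 100;
    double *local = malloc(n * sizeof(double));
    for (int i = 0; i < n; ++i) local[i] = (double)rank;

    MPI_Win win;
    MPI_Win_create(local, n * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* One-sided read from the last rank's window: the target does not
       participate beyond the collective fence calls.                      */
    double buf[100];
    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Get(buf, n, MPI_DOUBLE, nproc - 1, 0, n, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    free(local);
    MPI_Finalize();
    return 0;
}
```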