SUMMARY

We consider the suitability of the Java concurrency constructs for writing high-performance SPMD code for parallel machines. More specifically, we investigate implementing a financial application in Java on a distributed-memory parallel machine. Although Java was not expressly designed for such applications and architectures, we conclude that efficient implementations are feasible. Finally, we propose a library of Java methods to facilitate SPMD programming. ©1997 by John Wiley & Sons, Ltd.
MOTIVATION

Although Java was not specifically designed as a high-performance parallel-computing language, it does include concurrent objects (threads), and its widespread acceptance makes it an attractive candidate for writing portable, computationally intensive parallel applications. In particular, Java has become a popular choice for numerical financial codes, an example of which is arbitrage: detecting when the buying and selling of securities is temporarily profitable. These applications involve sophisticated modeling techniques such as successive over-relaxation (SOR) and Monte Carlo methods [1]. Other numerical financial applications include data mining (pattern discovery) and cryptography (secure transactions).

In this paper, we use an SOR code for evaluating American options (see Figure 1) [1] to explore the suitability of Java as a high-performance parallel-computing language. This work is being conducted in the context of a research effort to implement a Java runtime system (RTS) for the IBM POWERparallel System SP machine [2], which is designed to scale effectively to large numbers of processors. The RTS is being written in C with calls to MPI (message passing interface) [3] routines. Plans are to move to a Java plus MPI version when one becomes available.

The typical programming idiom for highly parallel machines is called data-parallel or single-program multiple-data (SPMD), where the data provide the parallel dimension. Parallelism is conceptually specified as a loop whose iterates operate on elements of a, perhaps multidimensional, array. Data dependences between parallel-loop iterates lead either to a producer-consumer type of sharing, wherein one iterate writes variables that are later read by another, or to collective communication, wherein all iterates participate. The communication pattern between iterates is often very regular, for example a bidirectional flow of variables between consecutive iterates (as in the code in Figure 1).
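To make the idiom concrete, the following is a minimal sketch (not the paper's Figure 1 code) of how an SPMD-style relaxation loop can be expressed with Java threads: each thread owns a contiguous slice of a shared array, and a barrier separates iterations so that neighbouring slices can safely read each other's boundary values, giving exactly the bidirectional flow between consecutive iterates described above. The class name, array sizes, and the use of a Jacobi-style update are illustrative assumptions.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Illustrative SPMD sketch: THREADS threads each relax one slice of a
// shared 1-D array, double-buffering between a and b. The barrier
// ensures every slice finishes iteration k before any slice reads the
// updated boundary values in iteration k+1.
public class SpmdRelax {
    static final int N = 16, THREADS = 4, ITERS = 10;
    static final double[] a = new double[N + 2];  // interior 1..N plus ghost ends
    static final double[] b = new double[N + 2];
    static final CyclicBarrier barrier = new CyclicBarrier(THREADS);

    public static void main(String[] args) throws InterruptedException {
        a[0] = 1.0; a[N + 1] = 1.0;               // fixed boundary conditions
        b[0] = 1.0; b[N + 1] = 1.0;
        Thread[] ts = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            final int lo = 1 + t * (N / THREADS); // this thread's slice [lo, hi)
            final int hi = lo + N / THREADS;
            ts[t] = new Thread(() -> {
                double[] src = a, dst = b;
                try {
                    for (int it = 0; it < ITERS; it++) {
                        for (int i = lo; i < hi; i++)
                            dst[i] = 0.5 * (src[i - 1] + src[i + 1]); // Jacobi update
                        barrier.await();          // all writes done before next read
                        double[] tmp = src; src = dst; dst = tmp;
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    throw new RuntimeException(e);
                }
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        double[] result = (ITERS % 2 == 0) ? a : b; // buffer holding the last update
        System.out.printf("u[1]=%.4f u[N]=%.4f%n", result[1], result[N]);
    }
}
```

Note that a single barrier per iteration suffices here: no thread can begin writing the buffer for iteration k+2 until every thread has finished reading it in iteration k+1. A true SOR code would update in place rather than double-buffer, which introduces the producer-consumer dependences between consecutive iterates that the paper discusses.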
This paper explores the suitability of the Java concurrency constructs for writing SPMD programs. In particular, the paper:

1. identifies the differences between the parallelism supported by Java and data parallelism