Summary
The Apache Spark framework for distributed computation is popular in the data analytics community due to its ease of use, but its MapReduce-style programming model can incur significant overheads when performing computations that do not map directly onto this model. One way to mitigate these costs is to offload computations onto MPI codes. In recent work, we introduced Alchemist, a system for the analysis of large-scale data sets. Alchemist calls MPI-based libraries from within Spark applications, and it does so with minimal coding, communication, and memory overheads. In particular, Alchemist allows users to retain the productivity benefits of working within the Spark software ecosystem without sacrificing performance efficiency in linear algebra, machine learning, and other related computations. In this paper, we discuss the motivation behind the development of Alchemist, and we provide a detailed overview of its design and usage. We also demonstrate the efficiency of our approach on medium-to-large data sets, using two standard linear algebra operations, matrix multiplication and the truncated singular value decomposition of a dense matrix, and we compare the performance of Spark alone with that of Spark+Alchemist. These computations are run on the NERSC supercomputer Cori Phase 1, a Cray XC40.
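To give a sense of the intended workflow summarized above, the following is a minimal sketch of how a Spark application might hand a distributed dense matrix to an MPI-backed truncated SVD through Alchemist. The Spark portions use the standard MLlib distributed-matrix API; the Alchemist-specific calls are left as commented placeholders (names such as AlchemistSession, sendIndexedRowMatrix, and truncatedSVD are illustrative assumptions, not the actual Alchemist interface, which is described in the body of the paper).

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix}

object SparkAlchemistSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("spark-alchemist-sketch").getOrCreate()
    val sc = spark.sparkContext

    // Build a small random dense matrix as a Spark IndexedRowMatrix
    // (the Spark-side representation used for medium-to-large data sets).
    val rows = sc.parallelize(0 until 1000).map { i =>
      IndexedRow(i.toLong, Vectors.dense(Array.fill(100)(scala.util.Random.nextDouble())))
    }
    val matA = new IndexedRowMatrix(rows)

    // Hypothetical Alchemist usage (placeholder names, not the actual API):
    // connect to the Alchemist-managed MPI processes, ship the matrix over,
    // run a rank-k truncated SVD inside the MPI library, and pull the
    // resulting factors back as Spark-side matrices.
    // val al  = AlchemistSession.connect(sc)                // placeholder
    // val alA = al.sendIndexedRowMatrix(matA)               // placeholder
    // val (alU, alS, alV) = al.truncatedSVD(alA, k = 20)    // placeholder
    // val U = alU.toIndexedRowMatrix()                      // placeholder
    // al.stop()

    spark.stop()
  }
}
```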