Modern computing systems employ significant heterogeneity and specialization to meet performance targets at manageable power. However, memory latency bottlenecks remain problematic, particularly for sparse neural network and graph analytics applications where indirect memory accesses (IMAs) challenge the memory hierarchy.

Decades of prior art have proposed hardware and software mechanisms to mitigate IMA latency, but they fail to analyze real-chip considerations, especially when used in SoCs and manycores. In this paper, we revisit many of these techniques while taking into account manycore integration and verification.

We present the first system implementation of latency tolerance hardware that provides significant speedups without requiring any memory hierarchy or processor tile modifications. This is achieved through a Memory Access Parallel-Load Engine (MAPLE), integrated through the Network-on-Chip (NoC) in a scalable manner. Our hardware-software co-design allows programs to perform long-latency memory accesses asynchronously from the core, avoiding pipeline stalls and enabling greater memory-level parallelism (MLP).

In April 2021 we taped out a manycore chip that includes tens of MAPLE instances for efficient data supply. MAPLE demonstrates a full RTL implementation of out-of-core latency-mitigation hardware, with virtual memory support and automated compilation targeting it. This paper evaluates MAPLE integrated with a dual-core FPGA prototype running applications with full SMP Linux, and demonstrates geomean speedups of 2.35× and 2.27× over software-based prefetching and decoupling, respectively. Compared to state-of-the-art hardware, it provides geomean speedups of 1.82× and 1.72× over prefetching and decoupling techniques.
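To give a flavor of the programming model, the sketch below shows how a core might offload a sparse gather to a decoupled data-supply engine. The maple_load_async and maple_consume intrinsics and the WINDOW depth are hypothetical placeholders for illustration, not the actual API produced by our compiler.

/* Minimal sketch of decoupled indirect loads, assuming hypothetical
 * intrinsics: maple_load_async() enqueues an address to the engine,
 * and maple_consume() blocks until the oldest outstanding value
 * arrives. Queue depth (WINDOW) is an assumed parameter. */
#include <stdint.h>

#define WINDOW 16 /* in-flight requests, bounded by assumed queue depth */

extern void     maple_load_async(const uint64_t *addr);
extern uint64_t maple_consume(void);

/* Sparse gather y[i] = data[idx[i]]: the engine fetches data[idx[i]]
 * asynchronously while the core keeps issuing requests, exposing MLP
 * instead of stalling the pipeline on each cache miss. */
void gather(uint64_t *y, const uint64_t *data, const uint32_t *idx, int n) {
    int issued = 0, done = 0;
    while (done < n) {
        while (issued < n && issued - done < WINDOW)
            maple_load_async(&data[idx[issued++]]); /* fire and forget */
        y[done++] = maple_consume();                /* in-order results */
    }
}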
CCS CONCEPTS
• Computer systems organization → Multicore architectures; Reconfigurable computing; Heterogeneous (hybrid) systems.