As CMOS scaling reaches its technological limits, a radical departure from traditional von Neumann systems, which involve separate processing and memory units, is needed in order to significantly extend the performance of today's computers. In-memory computing is a promising approach in which nanoscale resistive memory devices, organized in a computational memory unit, are used for both processing and memory. However, to reach the numerical accuracy typically required for data analytics and scientific computing, limitations arising from device variability and non-ideal device characteristics need to be addressed. Here we introduce the concept of mixed-precision in-memory computing, which combines a von Neumann machine with a computational memory unit. In this hybrid system, the computational memory unit performs the bulk of a computational task, while the von Neumann machine implements a backward method to iteratively improve the accuracy of the solution. The system therefore benefits from both the high precision of digital computing and the energy/areal efficiency of in-memory computing. We experimentally demonstrate the efficacy of the approach by accurately solving systems of linear equations, in particular, a system of 5,000 equations using 998,752 phase-change memory devices.
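The interplay between the two units can be illustrated with a small numerical sketch. The code below is not the authors' implementation; it assumes a toy model in which the analog crossbar is represented by a function (here called `crossbar_matvec`) that returns a matrix-vector product corrupted by multiplicative read noise, and the inexact inner solver is a simple Richardson iteration built on that product, while the outer loop computes exact residuals and accumulates corrections in double precision (iterative refinement). The noise level, step size, and problem sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_matvec(A, x, rel_noise=0.03):
    # Stand-in for the analog matrix-vector product performed in the
    # memory crossbar: exact product plus multiplicative read noise.
    y = A @ x
    return y * (1.0 + rel_noise * rng.standard_normal(y.shape))

def inexact_inner_solve(A, r, n_inner=20):
    # Low-precision inner solver (Richardson iteration) that relies only on
    # the noisy crossbar product; returns an approximate correction z ~ A^{-1} r.
    omega = 1.0 / np.linalg.norm(A, 2)        # step size, computed digitally
    z = np.zeros_like(r)
    for _ in range(n_inner):
        z = z + omega * (r - crossbar_matvec(A, z))
    return z

def mixed_precision_solve(A, b, tol=1e-9, max_outer=50):
    # Outer loop on the von Neumann machine: compute the residual exactly,
    # let the inexact unit produce a correction, accumulate in high precision.
    x = np.zeros_like(b)
    for _ in range(max_outer):
        r = b - A @ x                          # high-precision residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + inexact_inner_solve(A, r)      # low-precision correction
    return x

# Example: a small, well-conditioned symmetric positive-definite system.
n = 100
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Even though each inner solve is only accurate to a few percent, the exact outer residual lets the refinement loop keep shrinking the error, which is how the hybrid system recovers high numerical accuracy from an imprecise computational memory unit.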
In-memory computing is a promising non-von Neumann approach in which certain computational tasks are performed within resistive memory units by exploiting their physical attributes. In this paper, we propose a new method for fast and robust compressed sensing of sparse signals with approximate message passing recovery using in-memory computing. The measurement matrix for compressed sensing is encoded in the conductance states of resistive memory devices organized in a crossbar array. This way, the matrix-vector multiplications associated with both the compression and recovery tasks can be performed by the same crossbar array without intermediate data movement, with potentially O(1) time complexity. For a signal of size N, the proposed method achieves a potential O(N)-fold reduction in recovery complexity compared with a standard software approach. We show the array-level robustness of the scheme through large-scale experimental demonstrations using more than 256k phase-change memory devices.
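As a concrete illustration of how compression and approximate message passing (AMP) recovery can share one matrix-vector primitive, here is a minimal sketch. It is not the paper's implementation: the crossbar is again modeled as a noisy product (the function `crossbar_matvec`, the 3% noise level, the threshold heuristic, and the problem sizes are assumptions for illustration only).

```python
import numpy as np

rng = np.random.default_rng(1)

def crossbar_matvec(G, v, rel_noise=0.03):
    # Stand-in for the analog product with the matrix stored as conductances;
    # the same physical array supports products with G and with G transposed.
    y = G @ v
    return y * (1.0 + rel_noise * rng.standard_normal(y.shape))

def soft_threshold(u, theta):
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp_recover(A, y, n_iter=30, alpha=1.5):
    # AMP recovery of a sparse x from y = A x; both matrix-vector
    # products in each iteration go through the noisy crossbar model.
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.sqrt(np.mean(z ** 2))      # threshold from residual energy
        pseudo = x + crossbar_matvec(A.T, z)          # A^T z (recovery matvec)
        x_new = soft_threshold(pseudo, theta)
        onsager = (z / M) * np.count_nonzero(x_new)   # Onsager correction term
        z = y - crossbar_matvec(A, x_new) + onsager   # A x (compression matvec)
        x = x_new
    return x

# Example: recover a k-sparse signal from M < N random measurements.
N, M, k = 1000, 400, 50
A = rng.standard_normal((M, N)) / np.sqrt(M)          # measurement matrix, as stored on the array
x_true = np.zeros(N)
support = rng.choice(N, k, replace=False)
x_true[support] = rng.standard_normal(k)
y = crossbar_matvec(A, x_true)                        # compression done in-memory
x_hat = amp_recover(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The sketch shows the key point of the scheme: the measurement matrix never leaves the array, and every A x and A^T z product in both the sensing and the recovery stages is delegated to the same crossbar.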
Computational memory (CM) is a promising non-von Neumann approach where certain computational tasks are performed within resistive memory units by exploiting their physical attributes. We propose a new method for fast and robust compressed sensing (CS) recovery of sparse signals using CM. For a signal of size N, this method achieves a potential O(N)-fold complexity reduction compared with a standard software approach. Large-scale experimental demonstrations using more than 256k phase-change memory (PCM) devices are presented along with an in-depth device analysis and array-level considerations.
Finite difference methods are widely used, highly parallel algorithms for solving differential equations. However, these algorithms are memory-bound and thus difficult to implement efficiently on CPUs or GPUs. In this work we study the implementation of the finite-difference time-domain (FDTD) method for solving Maxwell's equations on an FPGA-based Maxeler dataflow computer. We evaluate our work with actual problems from the domain of computational nanophotonics. The use of realistic simulations requires us to pay special attention to boundary conditions (Dirichlet, periodic, absorbing), which are critical for the correctness of results but detrimental to performance, and thus frequently neglected. We discuss and evaluate the design of two different FDTD implementations, both of which outperform CPU and GPU implementations. To our knowledge, our implementation is the fastest FPGA-based FDTD solver.
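To make the stencil structure and the role of the boundary conditions concrete, the following is a minimal one-dimensional FDTD (Yee-scheme) sketch in normalized units with Courant number 1. It is purely illustrative and unrelated to the Maxeler dataflow implementation; the grid size, source placement, pulse parameters, and the first-order Mur absorbing boundary are assumptions chosen for simplicity.

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=1000, boundary="absorbing"):
    # 1-D FDTD (Yee scheme), normalized units, Courant number S = 1:
    # Ez lives on integer grid points, Hy on the half-grid points between them.
    Ez = np.zeros(n_cells)
    Hy = np.zeros(n_cells - 1)
    for t in range(n_steps):
        Hy += Ez[1:] - Ez[:-1]                    # H update from the curl of E
        ez1_old, ezm2_old = Ez[1], Ez[-2]         # neighbours at time step n, for the ABC
        Ez[1:-1] += Hy[1:] - Hy[:-1]              # E update (interior stencil only)
        Ez[n_cells // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)   # soft Gaussian source
        # Boundary conditions determine what happens at the two edge points.
        if boundary == "dirichlet":
            Ez[0] = Ez[-1] = 0.0                  # perfect electric conductor walls
        elif boundary == "periodic":
            Ez[0] += Hy[0] - Hy[-1]               # wrap the curl around the domain
            Ez[-1] = Ez[0]
        elif boundary == "absorbing":
            Ez[0], Ez[-1] = ez1_old, ezm2_old     # 1st-order Mur ABC (exact for S = 1 in 1-D)
        else:
            raise ValueError(boundary)
    return Ez

print(np.max(np.abs(fdtd_1d())))
```

Each time step touches every field value while performing only a couple of additions per point, which is why the method is memory-bound on CPUs and GPUs and maps well to streaming dataflow hardware; the edge-point handling above is the part that realistic boundary conditions complicate.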