Processing systems are in continuous evolution thanks to constant technological and architectural progress. Over the years, computing systems have become increasingly powerful, providing support for applications, such as Machine Learning, that require high computational power. However, the growing complexity of modern computing units and applications has had a strong impact on power consumption. In addition, memory plays a key role in the overall power consumption of the system, especially in data-intensive applications, which require large amounts of data to be moved between the memory and the computing unit. The consequence is twofold: memory accesses are expensive in terms of energy, and much time is spent accessing the memory rather than processing, because of the performance gap between memories and processing units. This gap is known as the memory wall, or the von Neumann bottleneck, and stems from the different rates of progress of complementary metal–oxide–semiconductor (CMOS) technology and memories. Moreover, CMOS scaling is itself approaching a limit beyond which further progress will not be possible. This work addresses these problems from both an architectural and a technological point of view by: (1) proposing a novel Configurable Logic-in-Memory Architecture that exploits the in-memory computing paradigm to mitigate the memory wall while also providing high performance thanks to its flexibility and parallelism; (2) exploring a non-CMOS technology as a candidate technology for the Logic-in-Memory paradigm.
In most computing systems, memory access represents a significant bottleneck for circuit performance: the execution speed of algorithms is severely limited by memory access time. An emerging technology such as NanoMagnet Logic (NML), whose magnetic nature provides an intrinsic memory capability, therefore represents a very promising opportunity to address this issue. NanoMagnet Logic is the ideal candidate for implementing the so-called Logic-In-Memory (LIM) architecture. But how can an architecture be organized in which logic and memory are intermixed rather than separate entities? In this paper we address this question by presenting our recent developments on LIM architectures. We originally conceived a LIM architecture without considering any technological constraints; here we present the first adaptation of that architecture to NanoMagnet Logic technology. The architecture is based on an array of identical cells organized on three virtual layers: one for logic, one for memory, and one for information routing. These three virtual layers are mapped onto two physical layers, exploiting all our recent improvements in NanoMagnet Logic technology, which are validated with the help of low-level simulations. The structure has been tested by implementing two different algorithms, a sorting algorithm and an image manipulation algorithm, and a complete characterization in terms of area and power is reported. The structure presented here is therefore the first step of an ongoing effort directed toward the development of truly innovative architectures.
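To make the idea of a cell array that mixes storage and computation more concrete, the following is a minimal behavioral sketch in Python; it is not the NML implementation described above, and all names (Cell, LimArray, sort_in_array) are hypothetical and introduced only for illustration. Each cell holds a word (memory layer), compares it with its right neighbor (logic layer), and exchanges values over nearest-neighbor links (routing layer), so that a sort is carried out in place next to the data, which is one of the two algorithms the abstract mentions as a test case.

from dataclasses import dataclass
from typing import List


@dataclass
class Cell:
    value: int  # memory layer: the word stored in this cell


class LimArray:
    """Hypothetical behavioral model of a 1-D array of identical LIM cells."""

    def __init__(self, values: List[int]) -> None:
        self.cells = [Cell(v) for v in values]

    def compare_and_swap(self, i: int) -> None:
        # Logic layer: cell i compares its word with its right neighbor;
        # routing layer: the two values are exchanged if out of order.
        left, right = self.cells[i], self.cells[i + 1]
        if left.value > right.value:
            left.value, right.value = right.value, left.value

    def sort_in_array(self) -> List[int]:
        # Odd-even transposition sort: n phases of purely local
        # compare-and-swap steps, so data never leaves the array.
        n = len(self.cells)
        for phase in range(n):
            start = phase % 2  # alternate between even and odd cell pairs
            for i in range(start, n - 1, 2):
                self.compare_and_swap(i)
        return [c.value for c in self.cells]


if __name__ == "__main__":
    array = LimArray([7, 3, 9, 1, 4, 8, 2, 6])
    print(array.sort_in_array())  # [1, 2, 3, 4, 6, 7, 8, 9]

The odd-even transposition scheme is chosen here only because it relies exclusively on nearest-neighbor exchanges, which is the kind of local logic and routing an array of identical cells can provide; the actual algorithms and cell behavior used in the paper are defined by the architecture itself.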