Processing large amounts of data, especially in learning algorithms, poses a challenge for current embedded computing systems. Hyperdimensional computing (HDC) is a brain-inspired computing paradigm that operates on high-dimensional vectors called hypervectors. HDC replaces several complex learning computations with bitwise and other simple arithmetic operations, at the expense of an increased data volume caused by mapping the data into high-dimensional space. These hypervectors are often too large to fit in memory, resulting in long data transfers from storage. In this paper, we propose Store-n-Learn, an in-storage computing (ISC) solution that performs HDC classification and clustering by implementing encoding, training, retraining, and inference across the flash hierarchy. To hide the latency of training and enable efficient computation, we introduce the concept of batching
in HDC. We also present on-chip acceleration for HDC encoding in flash planes. This enables us to exploit the high parallelism of the flash hierarchy and encode multiple data points in parallel, in both batched and non-batched fashion. Store-n-Learn also implements a single top-level FPGA accelerator with novel implementations of HDC classification training, retraining, inference, and clustering on the encoded data. Our evaluation over ten popular datasets shows that Store-n-Learn is on average 222× (543×) faster than CPU and 10.6× (7.3×) faster than the state-of-the-art ISC solution INSIDER for HDC classification (clustering).
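To make the pipeline named in the abstract concrete, the sketch below shows a generic software version of HDC classification: record-based encoding into hypervectors, one-shot training by bundling class prototypes, and similarity-based inference. This is a minimal NumPy illustration under assumed parameters (dimensionality D = 10,000, bipolar hypervectors, 16 quantization levels); it is not Store-n-Learn's flash-plane or FPGA implementation, and the function names and encoder details are illustrative assumptions rather than the paper's design.

```python
# Minimal software sketch of an HDC classification pipeline
# (encode -> train -> infer). Generic illustration only; NOT
# Store-n-Learn's hardware implementation. D, the record-based
# encoder, and all names are assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality (typical HDC choice)
N_FEATURES = 8
N_LEVELS = 16       # quantization levels for feature values

# Random bipolar (+1/-1) base hypervectors: one per feature position
# (IDs) and one per quantized value level.
id_hvs = rng.choice([-1, 1], size=(N_FEATURES, D))
level_hvs = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode(x):
    """Record-based encoding: bind each feature's ID hypervector with
    the hypervector of its quantized value (elementwise product), then
    bundle (sum) across features into one hypervector."""
    levels = np.clip((x * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    return (id_hvs * level_hvs[levels]).sum(axis=0)

def train(X, y, n_classes):
    """One-shot training: bundle encoded samples into class prototypes."""
    models = np.zeros((n_classes, D))
    for xi, yi in zip(X, y):
        models[yi] += encode(xi)
    return models

def infer(models, x):
    """Classify by cosine similarity against each class prototype."""
    h = encode(x)
    sims = models @ h / (np.linalg.norm(models, axis=1)
                         * np.linalg.norm(h) + 1e-9)
    return int(np.argmax(sims))

# Toy usage: two well-separated blobs of feature vectors in [0, 1).
X = np.vstack([rng.uniform(0.0, 0.4, (50, N_FEATURES)),
               rng.uniform(0.6, 1.0, (50, N_FEATURES))])
y = np.array([0] * 50 + [1] * 50)
models = train(X, y, n_classes=2)
print(infer(models, rng.uniform(0.0, 0.4, N_FEATURES)))  # expected: 0
```

Because each sample is encoded independently and training only accumulates sums, the encode and train steps parallelize naturally across data points; this is the property that batching and the per-plane encoders in Store-n-Learn exploit, although the paper's exact batching scheme is not reproduced in this sketch.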