Convolutional neural networks (CNNs) are emerging as powerful tools for visual recognition. Recent architecture proposals for sparse CNNs exploit natural and transformed zeros in the feature maps and filters to improve performance and energy without losing accuracy. Sparse architectures that exploit two-sided sparsity, in both feature maps and filters, have been studied only at small scales (e.g., 1K multiply-accumulate units (MACs)). However, to realize their advantages in full, sparse architectures have to be scaled up to the levels of dense architectures (e.g., 32K MACs in the TPU). Such scaling is challenging because achieving reuse through broadcasts incurs an implicit barrier cost, and alleviating that cost raises the inter-related issues of load imbalance, buffering, and on-chip bandwidth demand. SparTen, a previous scheme, addresses one aspect of load balancing but neither the other aspects nor the issues of buffering and bandwidth. To that end, we propose the barrier-free large-scale sparse tensor accelerator (BARISTA). BARISTA (1) is the first architecture for scaling up sparse CNN accelerators; (2) reduces on-chip bandwidth demand by telescoping (request-combining) the input-map requests and snarfing the filter requests; (3) reduces buffering via basic buffer sharing and avoids the ensuing barriers between consecutive input maps by coloring the output buffers; (4) load-balances intra-filter work via dynamic round-robin work assignment; and (5) employs hierarchical buffering, which achieves high cache bandwidth via a few wide, shared buffers and low buffering via narrower, private buffers at the compute units. Our simulations show that, on average, BARISTA performs 5.4x, 2.2x, 1.7x, and 2.5x better than a dense, a one-sided, a naively-scaled two-sided, and an isoarea two-sided architecture, respectively. Using 45-nm technology, ASIC synthesis of our RTL implementation for four clusters of 8K MACs each reports a 1-GHz clock speed, 213 mm² of area, and 170 W of power.
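To make contribution (4) concrete, below is a minimal software sketch of dynamic round-robin work assignment in the spirit of BARISTA's intra-filter load balancing. It is not the paper's hardware scheduler: the chunk-cost model, the function names (dynamic_round_robin, static_assignment), and the parameters are illustrative assumptions. The point it demonstrates is that, under sparsity, equal-sized chunks of filter work carry unequal numbers of non-zero MACs, so a fixed interleaving leaves some units idle while others lag; handing the next chunk to whichever unit goes idle first smooths the imbalance.

```python
import random
from collections import deque

# Hypothetical sketch (not BARISTA's RTL): compare a static interleaved
# assignment of sparse work chunks against dynamic on-demand assignment.

def dynamic_round_robin(chunks, num_units):
    """Each idle compute unit takes the next chunk of non-zero work on demand."""
    queue = deque(chunks)                 # per-chunk cost in MAC-cycles
    finish = [0] * num_units              # running busy time per unit
    while queue:
        unit = min(range(num_units), key=finish.__getitem__)  # next unit to go idle
        finish[unit] += queue.popleft()
    return max(finish)                    # makespan: when the slowest unit finishes

def static_assignment(chunks, num_units):
    """Fixed interleaving: unit i always gets chunks i, i + n, i + 2n, ..."""
    finish = [0] * num_units
    for i, cost in enumerate(chunks):
        finish[i % num_units] += cost
    return max(finish)

random.seed(0)
# Sparse filters yield work chunks with widely varying non-zero counts.
chunks = [random.randint(1, 100) for _ in range(1024)]
print("static makespan :", static_assignment(chunks, 32))
print("dynamic makespan:", dynamic_round_robin(chunks, 32))
```

The dynamic variant is classic greedy list scheduling; in hardware, the same effect would presumably be achieved with arbitration logic rather than a software queue.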
Convolutional neural networks (CNNs) are emerging as powerful tools for image processing in important commercial applications. We focus on the important problem of improving the latency of image recognition. While CNNs are highly amenable to prefetching and multithreading to avoid memory latency issues, CNNs’ large data – each layer’s input, filters, and output – poses a memory bandwidth problem. While previous work captures only some of the enormous data reuse, full reuse implies that the initial input image and filters are read once from off-chip and the final output is written once off-chip, without spilling the intermediate layers’ data to off-chip memory. We propose Occam to capture full reuse via four contributions. First, we identify the necessary conditions for full reuse. Second, we identify the dependence closure as the sufficient condition to capture full reuse using the least on-chip memory. Third, because the dependence closure is often too large to fit in on-chip memory, we propose a dynamic programming algorithm that optimally partitions a given CNN to guarantee the least off-chip traffic at the partition boundaries for a given on-chip capacity. While tiling is well known, our contribution is determining the optimal cross-layer tiles. Occam’s partitions reside on different chips, forming a pipeline so that a partition’s filters and dependence closure remain on-chip as different images pass through (i.e., each partition incurs off-chip traffic only for its inputs and outputs). Finally, because the optimal partitions may result in an unbalanced pipeline, we propose staggered asynchronous pipelines (STAPs), which replicate the bottleneck stages to improve throughput by staggering the mini-batches across the replicas. Importantly, STAPs achieve balanced pipelines without changing Occam’s optimal partitioning. Our simulations show that, on average, Occam cuts off-chip transfers by 21× and achieves 2.04× and 1.21× better performance, and 33% better energy, than the base case and Layer Fusion, respectively. Using a field-programmable gate array (FPGA) implementation, Occam performs 6.1× and 1.5× better, on average, than the base case and Layer Fusion, respectively.
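As an illustration of the third contribution, here is a minimal dynamic-programming sketch of capacity-constrained partitioning in the spirit of Occam. The cost model, a per-layer on-chip footprint (filters plus dependence closure) and a per-cut boundary traffic, is a stand-in assumption, not the paper's exact formulation, and all names (optimal_partitions, footprint, boundary, capacity) are hypothetical.

```python
import math

# Hypothetical DP sketch: choose partition boundaries for a layer pipeline
# so that each contiguous partition fits on-chip and the total off-chip
# traffic crossing the chosen boundaries is minimized.

def optimal_partitions(footprint, boundary, capacity):
    """footprint[i]: on-chip bytes layer i needs (filters + dependence closure).
    boundary[i]: off-chip bytes crossing the cut between layers i-1 and i.
    Returns (least boundary traffic, sorted list of cut points)."""
    n = len(footprint)
    best = [math.inf] * (n + 1)   # best[j]: least traffic for layers 0..j-1
    cut = [0] * (n + 1)           # cut[j]: start of the last partition
    best[0] = 0
    for j in range(1, n + 1):
        used = 0
        for i in range(j - 1, -1, -1):       # candidate last partition: layers i..j-1
            used += footprint[i]
            if used > capacity:              # partition no longer fits on-chip
                break
            cost = best[i] + (boundary[i] if i > 0 else 0)
            if cost < best[j]:
                best[j], cut[j] = cost, i
    cuts, j = [], n
    while j > 0:                             # walk back through the chosen cuts
        cuts.append(cut[j])
        j = cut[j]
    return best[n], sorted(c for c in cuts if c > 0)

# Toy model: 6 layers, sizes in arbitrary units, 100 units of on-chip memory.
traffic, cuts = optimal_partitions(
    footprint=[40, 30, 50, 20, 60, 25],
    boundary=[0, 80, 20, 90, 10, 70],
    capacity=100)
print("least boundary traffic:", traffic, "cut before layers:", cuts)
```

Because each candidate partition's footprint is accumulated incrementally, the sketch runs in O(n²) time for n layers, and optimality over the cut costs follows from the standard optimal-substructure argument for such prefix DPs.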