Abstract: In the near future, cameras will be used everywhere as flexible sensors for numerous applications. For mobility and privacy reasons, the required image processing should be performed locally on embedded computer platforms, which imposes performance requirements and energy constraints. Dedicated acceleration of Convolutional Neural Networks (CNN) can meet these targets with enough flexibility to perform multiple vision tasks. A challenging problem in the design of efficient accelerators is the limited external memory bandwidth. We show that the effects of this memory bottleneck can be reduced by a flexible memory hierarchy that supports the complex data access patterns in CNN workloads. The efficiency of the on-chip memories is maximized by our scheduler, which uses tiling to optimize for data locality. Our design flow ensures that the on-chip memory size is minimized, which reduces area and energy usage. The design flow is evaluated with a High-Level Synthesis implementation on a Virtex 6 FPGA board. Compared to accelerators with standard scratchpad memories, the FPGA resources can be reduced by up to 13x while maintaining the same performance. Alternatively, when the same amount of FPGA resources is used, our accelerators are up to 11x faster.
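The abstract only names tiling as the locality optimization; as a rough, hypothetical illustration of the idea (not the paper's actual schedule), the C sketch below tiles the output loops of a direct convolutional layer so that everything touched inside a tile can be kept in a small on-chip buffer and reused. All dimensions, tile sizes, and array names are invented for this example.

/* Minimal sketch of loop tiling for a convolutional layer, assuming a
 * direct, unit-stride convolution. Dimensions, tile sizes, and array
 * names are hypothetical, chosen only to illustrate data locality. */
#include <stdio.h>

#define IN_CH   16                  /* input feature maps             */
#define OUT_CH  32                  /* output feature maps            */
#define OUT_DIM 64                  /* output rows/columns            */
#define K       3                   /* kernel size                    */
#define IN_DIM  (OUT_DIM + K - 1)   /* input rows/columns             */
#define TY      8                   /* tile height (output rows)      */
#define TX      8                   /* tile width  (output columns)   */

static float in_img[IN_CH][IN_DIM][IN_DIM];
static float weights[OUT_CH][IN_CH][K][K];
static float out_img[OUT_CH][OUT_DIM][OUT_DIM];

static void conv_tiled(void)
{
    /* The outer loops walk over output tiles; the data touched inside a
     * tile (a TYxTX output patch, the matching input window, and one
     * output channel's weights) is small enough to stay on chip, so each
     * external-memory word is fetched far less often. */
    for (int oc = 0; oc < OUT_CH; oc++)
        for (int ty = 0; ty < OUT_DIM; ty += TY)
            for (int tx = 0; tx < OUT_DIM; tx += TX)
                /* ---- this region would be served from on-chip memory ---- */
                for (int y = ty; y < ty + TY; y++)
                    for (int x = tx; x < tx + TX; x++) {
                        float acc = 0.0f;
                        for (int ic = 0; ic < IN_CH; ic++)
                            for (int ky = 0; ky < K; ky++)
                                for (int kx = 0; kx < K; kx++)
                                    acc += in_img[ic][y + ky][x + kx] *
                                           weights[oc][ic][ky][kx];
                        out_img[oc][y][x] = acc;
                    }
}

int main(void)
{
    conv_tiled();
    printf("out[0][0][0] = %f\n", out_img[0][0][0]);
    return 0;
}

Reordering or resizing the tile loops trades input reuse against weight reuse, which is exactly the kind of choice a locality-aware scheduler has to make when the on-chip buffers are sized to a minimum.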
Citation for published version (APA): Bekooij, M. J. G., Hoes, R. J. H., Moreira, O., Poplavko, P., Pastrnak, M., Mesman, B., ... van Meerbergen, J. (2005). Dataflow analysis for real-time embedded multiprocessor system design. In P. van der Stok (Ed.), Dynamic and Robust Streaming in and between Connected Consumer-Electronic Devices (pp. 81-108). Philips Research Book Series, Vol. 3. Dordrecht: Springer.
We consider a dynamic application running on a multiprocessor network-on-chip as a set of independent jobs, each job possibly running on multiple processors. To provide guaranteed quality and performance, the scheduling of jobs, the jobs themselves, and the hardware must be amenable to timing analysis. For a certain class of applications and multiprocessor architectures, we propose exact timing models that co-model both the computation and the communication of a job. The models are based on interprocessor communication (IPC) graphs [4]. Our main contribution is a precise model of network-on-chip communication, including buffer models. We use a JPEG-decoder job as an example to demonstrate that our models can be used in practice to derive upper bounds on the job execution time and to reason about optimal buffer sizes.
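The IPC-graph models themselves are not reproduced in this abstract. As a toy, hypothetical sketch of the kind of buffer-size reasoning involved, the small C program below assumes the common modelling trick in which a FIFO of capacity B becomes a backward edge carrying B initial tokens in an HSDF graph, so the minimum achievable iteration period of a single producer/consumer pair follows from the maximum cycle mean. The execution times and buffer range are made-up example values, not taken from the paper.

/* Toy sketch of buffer-size reasoning on a two-actor HSDF graph, assuming
 * a FIFO of capacity 'buf' is modelled as a backward edge with 'buf'
 * initial tokens. The minimum achievable period is the maximum cycle mean:
 * the sum of actor times on a cycle divided by the tokens on that cycle.
 * All numbers below are made-up example values. */
#include <stdio.h>

/* Cycles considered: producer self-loop, consumer self-loop, and the
 * producer -> consumer -> producer cycle carrying 'buf' initial tokens. */
static double min_period(double t_prod, double t_cons, int buf)
{
    double self_loops = t_prod > t_cons ? t_prod : t_cons;
    double fifo_cycle = (t_prod + t_cons) / (double)buf;
    return fifo_cycle > self_loops ? fifo_cycle : self_loops;
}

int main(void)
{
    const double t_prod = 3.0;   /* worst-case producer execution time */
    const double t_cons = 5.0;   /* worst-case consumer execution time */

    /* Enlarging the buffer relaxes the FIFO cycle until the slowest actor
     * becomes the bottleneck; beyond that point extra buffer space buys
     * no additional throughput. */
    for (int buf = 1; buf <= 4; buf++)
        printf("buffer = %d tokens -> period >= %.2f time units\n",
               buf, min_period(t_prod, t_cons, buf));
    return 0;
}

Running this sketch shows that a buffer of two tokens already reaches the 5.0 time-unit bound set by the slowest actor, so in this toy case larger buffers add no throughput; the paper's models extend this style of reasoning to full jobs with network-on-chip communication.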