TOWARDS ACCELERATORS

Because of increasingly stringent energy constraints (e.g., Dark Silicon), there is a growing consensus in the community that we may be moving towards heterogeneous multi-core architectures, composed of a mix of cores and accelerators. Because our community has traditionally focused on general-purpose computing, we have mostly considered accelerator approaches such as GPUs and reconfigurable circuits. An attractive alternative is to investigate accelerators focused on a few key algorithms: key algorithms still mean a broad application scope, while few algorithms enable energy-efficient and cost-effective accelerators.

Assuming we want to go down the path of multi-purpose accelerators for energy reasons, the main question becomes: which applications should be considered? The PARSEC benchmarks were introduced to highlight a trend towards a new kind of high-performance applications (e.g., voice recognition, image analysis, navigation, etc.). Interestingly, many of the core tasks of these benchmarks turn out to correspond to inherently stochastic algorithms, such as clustering, classification, optimization, filtering, and approximation algorithms, i.e., tasks which are inherently tolerant to a certain degree of inaccuracy.

What accelerator design would then be appropriate? Considering the need for energy efficiency and fault/defect tolerance, as well as the nature of the emerging high-performance applications, hardware neural networks stand out as an attractive accelerator design. While Neural Networks (NNs) are often considered a niche application, consider the aforementioned applications: many of the emerging high-performance applications are based on machine-learning techniques, and there are competitive neural-network alternatives for all five aforementioned core algorithms (clustering, classification, optimization, filtering, approximation). As a result, NNs are much more a kernel on which many applications can be built than a niche algorithm. At the same time, an NN circuit is much closer to an ASIC than to a processor, so an NN accelerator can potentially achieve the energy efficiency of an ASIC while retaining the broad application scope mentioned above. Finally, one of the most attractive properties of neural networks is their inherent robustness to faults. Thanks to its learning algorithm, an NN can automatically silence faulty parts through retraining, without having to identify or disable these faults (a small sketch illustrating this effect is given below).

WHY HARDWARE NEURAL NETWORKS AGAIN?

However, NNs are certainly not a new c...
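To make the fault-tolerance argument above concrete, here is a minimal, hypothetical NumPy sketch (not taken from the paper): a tiny multi-layer perceptron is trained on a toy XOR task, half of its hidden units are then forced to output zero to emulate permanent hardware defects, and retraining the surviving weights recovers the function without ever identifying which units failed. The task, layer sizes, learning rate, and defect model are illustrative assumptions only.

# Hypothetical illustration: retraining "silences" defective hidden units.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

H = 16                                     # hidden-layer width (assumed)
W1, b1 = rng.normal(0.0, 1.0, (2, H)), np.zeros(H)
W2, b2 = rng.normal(0.0, 1.0, (H, 1)), np.zeros(1)
mask = np.ones(H)                          # 1 = healthy unit, 0 = defective

def forward(X):
    h = np.tanh(X @ W1 + b1) * mask        # defective units output nothing
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, out

def train(steps, lr=0.2):
    # Plain full-batch gradient descent on the squared error.
    global W1, b1, W2, b2
    for _ in range(steps):
        h, out = forward(X)
        d_out = (out - y) * out * (1.0 - out)          # through the sigmoid
        d_h = (d_out @ W2.T) * (1.0 - h ** 2) * mask   # through tanh + mask
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

def max_error():
    return float(np.abs(forward(X)[1] - y).max())

train(10000)
print("error before defects:", max_error())
mask[: H // 2] = 0.0                       # half the hidden units "fail"
print("error after defects :", max_error())
train(10000)                               # retrain around the dead units
print("error after retrain :", max_error())

The point is only qualitative: the learning algorithm reallocates the function to the remaining healthy units, which is the property that makes retraining-based defect tolerance plausible for a hardware NN.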