2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca.2015.7056067

BRAINIAC: Bringing reliable accuracy into neurally-implemented approximate computing

Abstract: Applications with large amounts of data, real-time constraints, ultra-low power requirements, and heavy computational complexity present significant challenges for modern computing systems, and often fall within the category of high performance computing (HPC). As such, computer architects have looked to high performance single instruction multiple data (SIMD) architectures, such as accelerator-rich platforms, for handling these workloads. However, since the results of these applications do not always require …

Cited by 52 publications (21 citation statements). References 86 publications.
“…Recently, approximate computing has emerged as an alternative approach for addressing potential timing failures with lower overhead than conventional guardband-based techniques [5], [11]. Existing studies have showcased the inherent resiliency of various signal/image processing [12], [13], machine learning [12], [14], and scientific computation algorithms [13] to faults or inaccurate operations. Most of the existing studies have indicated that any approximation should be applied only to error-resilient code or data regions in applications, since uniform approximation of all data may result in significant quality degradation [15], [16].…”
mentioning
confidence: 99%
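As a rough illustration of the selective-approximation point above, here is a minimal Python sketch in which only data regions tagged as error-resilient are quantized (a simple form of precision scaling), while critical data passes through exactly. The `approximate` quantizer and the region names are hypothetical, not drawn from any of the cited systems.

```python
import numpy as np

def approximate(data: np.ndarray, bits: int = 4) -> np.ndarray:
    """Illustrative lossy approximation: snap values to 2**bits levels."""
    lo, hi = data.min(), data.max()
    if hi == lo:
        return data.copy()
    levels = 2 ** bits
    norm = (data - lo) / (hi - lo)              # map to [0, 1]
    return np.round(norm * levels) / levels * (hi - lo) + lo

def selective_approximate(regions, bits=4):
    """Approximate only regions tagged error-resilient; leave the rest exact."""
    return {name: approximate(arr, bits) if resilient else arr
            for name, (arr, resilient) in regions.items()}

regions = {
    "pixels":  (np.random.rand(64, 64), True),       # resilient: small errors OK
    "offsets": (np.arange(64, dtype=float), False),  # critical: must stay exact
}
out = selective_approximate(regions)
```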
“…Robust techniques use predictive, online-learning, light-weight checks and monitoring strategies to compute quality loss and predict the extent of quality loss for subsequent inputs [56], [61]. The quality loss is compared against user-defined accuracy requirements to either tone down aggressive approximation or choose a different type of approximation technique [62], [63]. In case of unacceptable results, these approaches roll back, i.e., re-execute the candidate code blocks in accurate mode to cover for the accuracy loss.…”
Section: QoS
mentioning
confidence: 99%
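A minimal sketch of such a check-and-rollback loop, assuming a hypothetical pair of callables `approx_fn` (cheap, inexact) and `exact_fn` (reference): a light-weight check samples a fraction of inputs, measures relative loss against the exact result, and re-executes in accurate mode when the user-defined bound is exceeded.

```python
import random

def run_with_quality_control(approx_fn, exact_fn, inputs,
                             max_loss=0.05, sample_rate=0.1):
    """Run approximately; spot-check sampled inputs and roll back on failure."""
    results = []
    for x in inputs:
        y = approx_fn(x)
        if random.random() < sample_rate:      # light-weight quality check
            y_ref = exact_fn(x)
            loss = abs(y - y_ref) / (abs(y_ref) + 1e-12)
            if loss > max_loss:
                y = y_ref                      # roll back: keep accurate result
        results.append(y)
    return results

# Example: a first-order approximation of the reciprocal vs. the exact one.
approx = lambda x: 1.0 - (x - 1.0)             # Taylor expansion of 1/x around 1
exact = lambda x: 1.0 / x
out = run_with_quality_control(approx, exact, [0.9 + 0.02 * i for i in range(10)])
```

A production system would replace the sampled reference execution with a learned quality predictor, as the quoted work describes; the sampling here just keeps the sketch self-contained.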
“…Recent work has explored a variety of approximation techniques that include: (a) approximate storage designs [38, 39] that trade quality of data for reduced energy [38] and longer lifetime [39], (b) voltage overscaling [28, 40, 41], (c) loop perforation [30, 42, 43], (d) loop early termination [29], (e) computation substitution [6, 9, 29, 44], (f) memoization [7, 8, 45], (g) limited fault recovery [42, 46–50], (h) precision scaling [16, 51], (i) approximate circuit synthesis [19, 52–57], and (j) neural acceleration [10–15].…”
Section: Related Work
mentioning
confidence: 99%
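Of the listed techniques, loop perforation is the easiest to sketch: execute only every stride-th iteration and extrapolate from the sample. The function below is an illustrative stand-in, not code from the cited papers.

```python
def mean_perforated(values, stride=4):
    """Loop perforation: visit every `stride`-th element and extrapolate."""
    sampled = values[::stride]                 # ~stride-times fewer iterations
    return sum(sampled) / len(sampled)

vals = [float(i % 7) for i in range(10_000)]
exact = sum(vals) / len(vals)
approx = mean_perforated(vals)                 # small error, ~4x less work
```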
“…This characteristic of many GPU applications provides a unique opportunity to devise approximation techniques that trade small losses in the quality of results for significant gains in performance and efficiency. Among approximation techniques, neural acceleration provides significant gains for CPUs [10–14] and may be a good candidate for GPUs. Neural acceleration relies on an automated algorithmic transformation that converts an approximable segment of code to a neural network.…”
Section: Introduction
mentioning
confidence: 99%
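To make that transformation concrete, here is a hedged sketch of neural acceleration using scikit-learn: observe input/output pairs from a pure, approximable code segment, fit a small multilayer perceptron to mimic it, and substitute the network at run time. The `kernel` function is a hypothetical stand-in for a compiler-identified hot region; the systems cited would dispatch to a hardware neural accelerator rather than a software MLP.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def kernel(x, y):
    """Hypothetical approximable segment: a pure function of its inputs."""
    return np.sin(x) * np.cos(y) + 0.5 * x * y

# Training phase: record input/output pairs by running the exact code.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
t = kernel(X[:, 0], X[:, 1])

nn = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
nn.fit(X, t)

# Deployment phase: the trained network replaces the original segment.
approx = nn.predict(X[:100])
print("mean abs error:", float(np.abs(approx - t[:100]).mean()))
```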