2011 IEEE 9th Symposium on Application Specific Processors (SASP)
DOI: 10.1109/sasp.2011.5941086
A hardware acceleration technique for gradient descent and conjugate gradient

Abstract: Gradient descent, conjugate gradient, and other iterative algorithms are a powerful class of algorithms; however, they can take a long time for convergence. Baseline accelerator designs feature insufficient coverage of operations and do not work well on the problems we target. In this thesis we present a novel hardware architecture for accelerating gradient descent and other similar algorithms. To support this architecture, we also present a sparse matrix-vector storage format, and software support for utilizi…
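
The kernels the abstract names, gradient descent steps driven by sparse matrix-vector products, can be summarized in a short software sketch. This is only an illustration of the operations involved, not the paper's architecture: the CSR layout, the function names, and the fixed-step solver below are assumptions made to keep the example self-contained.

    # Illustrative only: a software analogue of the kernels an accelerator like this
    # targets (sparse matrix-vector multiply plus vector updates). CSR storage is an
    # assumption; the paper's own storage format is not reproduced here.
    import numpy as np

    def csr_matvec(values, col_idx, row_ptr, x):
        """Compute y = A @ x for a sparse matrix A stored in CSR form."""
        y = np.zeros(len(row_ptr) - 1)
        for i in range(len(y)):
            start, end = row_ptr[i], row_ptr[i + 1]
            y[i] = np.dot(values[start:end], x[col_idx[start:end]])
        return y

    def gradient_descent(values, col_idx, row_ptr, b, step=0.01, iters=500):
        """Minimize 0.5*x'Ax - b'x for symmetric A; the gradient is A @ x - b."""
        x = np.zeros(len(b))
        for _ in range(iters):
            grad = csr_matvec(values, col_idx, row_ptr, x) - b
            x -= step * grad
        return x

The fixed step size is a stand-in for whatever step-size rule a real solver would use; for a symmetric positive-definite A it must stay below 2 divided by the largest eigenvalue of A for the iteration to converge.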

Cited by 6 publications (3 citation statements); references 17 publications.
“…Using this insight, TABLA provides a high-level abstraction for programmers to utilize FPGAs as the accelerator of choice for machine learning algorithms without exposing the details of hardware design. There have also been architectures that accelerate gradient descent [47] and conjugate gradient descent [47][48][49][50]. However, these works do not specialize their architectures in machine learning algorithms or any specific objective function.…”
Section: Related Work
confidence: 99%
“…Gradient descent accelerators. The work by Kesler [47] focuses only on designing an accelerator suitable for different linear algebra operations to facilitate the gradient descent and conjugate gradient algorithms. Machine learning accelerators.…”
Section: Related Work
confidence: 99%
“…The quadratic nature of the problem allows CG to converge in fewer iterations (compared to gradient descent), while also naturally tolerating certain types of errors. Additionally, by using a specialized accelerator for linear operations [9], more applications such as Graph Matching can achieve energy benefits by using stochastic processors. For some problems, such as Sorting, it will remain difficult, however, even with accelerators, to achieve energy benefits with voltage/scaling type models.…”
Section: Application Transformations For Robustness
confidence: 99%
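
The passage above rests on a standard property of conjugate gradient: for a quadratic objective, equivalently a symmetric positive-definite system Ax = b, it converges in at most n iterations and usually far fewer than plain gradient descent. The sketch below is textbook CG in dense form, written here only to make that point concrete; it is not code from [9] or any of the cited works.

    # Rough illustration: conjugate gradient on a symmetric positive-definite
    # system Ax = b, the quadratic case the statement above refers to.
    # A dense NumPy matrix is used purely to keep the example short.
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iters=None):
        n = len(b)
        max_iters = max_iters or n
        x = np.zeros(n)
        r = b - A @ x          # residual = negative gradient of 0.5*x'Ax - b'x
        p = r.copy()           # initial search direction
        rs = r @ r
        for _ in range(max_iters):
            Ap = A @ p
            alpha = rs / (p @ Ap)      # exact line search along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:  # converged
                break
            p = r + (rs_new / rs) * p  # next A-conjugate direction
            rs = rs_new
        return x

In exact arithmetic the loop terminates after at most n iterations for an n-by-n system, which is the fewer-iterations behavior the quoted statement relies on.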