2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA)
DOI: 10.1109/isca.2018.00060
GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks

Abstract: Generative Adversarial Networks (GANs) are one of the most recent deep learning models that generate synthetic data from limited genuine datasets. GANs are at the frontier because extending deep learning into many domains (e.g., medicine, robotics, content synthesis) requires massive sets of labeled data that are generally either unavailable or prohibitively costly to collect. Although GANs are gaining prominence in various fields, there are no accelerators for these new models. In fact, GANs leverage a n…

Cited by 80 publications (61 citation statements)
References 57 publications
“…We observe that stereo DNNs make heavy use of the deconvolution operation that exposes specific kernel sparsity, making conventional DNN accelerators inefficient. While prior work proposed specialized hardware to exploit deconvolution sparsity [60,76], we demonstrate that static software optimizations achieve better results without unnecessary hardware modifications.…”
Section: Introduction
confidence: 82%
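To make the sparsity this citing statement refers to concrete, the sketch below (our own illustration, not code from the cited papers; the 1-D shapes, stride, and kernel values are assumptions chosen for the example) expresses a transposed convolution (deconvolution) as zero-stuffing followed by an ordinary dense convolution, which is how a conventional dense accelerator would see the computation. At stride 2, nearly half of the operands fed to the multiply-accumulate units are structurally zero.

```python
import numpy as np

def transposed_conv1d_dense_view(x, kernel, stride=2):
    """Transposed convolution expressed as zero-stuffing + dense convolution."""
    # Insert (stride - 1) zeros between neighbouring input elements.
    up = np.zeros(len(x) * stride - (stride - 1))
    up[::stride] = x
    # Full dense convolution over the zero-stuffed signal: every tap is
    # multiplied, including the taps that land on inserted zeros.
    pad = len(kernel) - 1
    padded = np.pad(up, (pad, pad))
    out = np.array([np.dot(padded[i:i + len(kernel)], kernel[::-1])
                    for i in range(len(padded) - len(kernel) + 1)])
    zero_fraction = 1.0 - np.count_nonzero(up) / up.size
    return out, zero_fraction

x = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical feature-map row
k = np.array([0.5, 1.0, 0.5])        # hypothetical kernel
y, zf = transposed_conv1d_dense_view(x, k)
print(y)
print(f"{zf:.0%} of the convolution operands are structurally zero")
```

The wasted multiplications on those inserted zeros are exactly the inefficiency that the hardware proposals [60,76] and the software optimizations discussed by the citing authors try to remove.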
“…Stereo vision DNNs make use of deconvolution layers, which expose structured sparsity patterns. Recent work has proposed specialized hardware specifically for exploiting sparsity in deconvolution layers [60,76]. Our observation, however, is that mitigating sparsity-induced inefficiencies in deconvolution does not necessarily require hardware support.…”
Section: Related Work
confidence: 98%
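A software-only reformulation of the same operator can be sketched as follows (again our own example under the assumed 1-D shapes, not necessarily the exact transformation the citing paper applies): each real input element is scattered directly into the output, so the inserted zeros are never multiplied at all.

```python
import numpy as np

def transposed_conv1d_scatter(x, kernel, stride=2):
    """Same transposed convolution, but each real input element is scattered
    into the output, so zero-stuffed operands are never touched."""
    out = np.zeros((len(x) - 1) * stride + len(kernel))
    for i, xi in enumerate(x):
        out[i * stride : i * stride + len(kernel)] += xi * kernel
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([0.5, 1.0, 0.5])
print(transposed_conv1d_scatter(x, k))  # matches the zero-stuffed version
```

In 2-D, one common variant of this idea regroups the kernel taps by output phase into several dense sub-convolutions, which a conventional dense accelerator can already execute efficiently without the hardware modifications proposed in [60,76].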
“…Few works have focused on accelerating deconvolutions [6,11,12]. In [11,12], the researchers addressed the acceleration of deconvolution in generative adversarial networks (GANs).…”
Section: Related Work
confidence: 99%
“…The initial catalyst for this rise in popularity was the discovery that MI could produce low error rates for image classification [1][2][3]. Subsequently, there has been a large amount of work optimizing hardware for MI, especially for Convolutional Neural Networks (CNNs) (e.g., [17]-[30]). Although these works have led to significant improvements in performance and energy efficiency of CNNs on modern multi-core CPUs, GPUs, and accelerators, it is challenging to analyze how future architectures will perform for these workloads.…”
Section: Introduction
confidence: 99%