2018 IEEE International Symposium on Workload Characterization (IISWC)
DOI: 10.1109/iiswc.2018.8573476
Benchmarking and Analyzing Deep Neural Network Training

Abstract: The recent popularity of deep neural networks (DNNs) has generated a lot of research interest in performing DNN-related computation efficiently. However, the primary focus is usually very narrow and limited to (i) inference, i.e., how to efficiently execute already trained models, and (ii) image classification networks as the primary benchmark for evaluation. Our primary goal in this work is to break this myopic view by (i) proposing a new benchmark for DNN training, called TBD, that uses a representative set …

Cited by 128 publications (88 citation statements)
References 71 publications
“…DNN performance analysis and optimization Current publicly available benchmarks [3,8,19,55] for DNNs focus on neural networks with FC, CNN, and RNN layers only.…”
Section: Related Work
Confidence: 99%
“…Zhu et al. [15] study the training performance and resource utilization of eight deep learning models implemented on three machine learning frameworks running on servers (not mobile devices) across different hardware configurations. However, they do not consider power and energy efficiency.…”
Section: Related Work
Confidence: 99%
“…However, there is no discussion in the text about potential scalability effects beyond 8 GPUs. An extensive performance analysis and profiling of DNN training is performed in [31], where eight state-of-the-art DNN models are implemented on three major deep learning frameworks (TensorFlow, MXNet, and CNTK). The objective is to evaluate the efficiency of training for different hardware configurations (single-/multi-GPU and multi-machine).…”
Section: Related Work
Confidence: 99%