2018
DOI: 10.48550/arxiv.1801.08618
Preprint

JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services


Cited by 8 publications (36 citation statements)
References 0 publications
“…Therefore, when training the DNN with the back-propagation rule in the stochastic gradient descent (SGD) algorithm [12], the workers need to communicate to exchange the intermediate results. The works of JointDNN [8] and JALAD [13] demonstrate the effectiveness of the model parallelism method. However, since the layers of the DNN are trained sequentially, when one worker is computing, the others must stay idle.…”
Section: Background and Motivation
confidence: 89%
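The idle-worker problem described in this excerpt can be illustrated with a toy sketch (not JointDNN's actual code): each worker holds a slice of layers, and because the forward pass through a chain of layers is sequential, the downstream worker cannot start until it receives the upstream worker's intermediate activation.

```python
# Toy illustration of layer-wise model parallelism. The layer functions
# and the 2-worker split are hypothetical, chosen only to show the
# sequential dependency between workers.

def forward(layers, x):
    """Run x through a list of layer functions in order."""
    for layer in layers:
        x = layer(x)
    return x

# Hypothetical 4-layer network split across two workers.
worker_a = [lambda x: x * 2, lambda x: x + 1]   # layers 1-2 on worker A
worker_b = [lambda x: x * 3, lambda x: x - 4]   # layers 3-4 on worker B

# Worker B must stay idle until worker A finishes and communicates its
# intermediate result (the exchange step the excerpt refers to).
intermediate = forward(worker_a, 5)   # worker B idle during this call
output = forward(worker_b, intermediate)
print(output)  # ((5*2 + 1) * 3) - 4 = 29
```

The same dependency holds in reverse during back-propagation, which is why plain model parallelism keeps only one worker busy at a time.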
“…2) All-Cloud: The edge device transmits all the training data samples to the cloud center, and the cloud center completes the DNN training. 3) JointDNN [8]: The edge device and the cloud center jointly train the DNNs.…”
Section: Baselines
confidence: 99%
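The contrast between the all-cloud baseline and the joint edge/cloud scheme attributed to JointDNN [8] can be sketched as a partition of the layer list: early layers run on the edge device, the intermediate activation is uploaded, and the remaining layers run in the cloud. The layers and the partition point below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of an edge/cloud layer split. "split" is a hypothetical
# partition point; JointDNN chooses it by optimizing latency/energy.

def run_layers(layers, x):
    """Apply a list of layer functions to x in sequence."""
    for layer in layers:
        x = layer(x)
    return x

all_layers = [lambda x: x + 1, lambda x: x * 2,   # early layers
              lambda x: x - 3, lambda x: x * x]   # later layers
split = 2  # hypothetical partition point

edge_part, cloud_part = all_layers[:split], all_layers[split:]

x = 4
activation = run_layers(edge_part, x)        # computed on the edge device
result = run_layers(cloud_part, activation)  # uploaded, finished in the cloud
print(result)  # ((4+1)*2 - 3)**2 = 49

# All-cloud baseline: upload the raw input and run every layer remotely.
# The numerical result is identical; only where the work happens differs.
assert run_layers(all_layers, x) == result
```

In the all-cloud baseline the raw input is transmitted; in the joint scheme only the (often smaller) intermediate activation crosses the network, which is the trade-off these baselines compare.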