2020 Eighth International Symposium on Computing and Networking Workshops (CANDARW)
DOI: 10.1109/candarw51189.2020.00036
Distributed Deep Learning of ResNet50 and VGG16 with Pipeline Parallelism

Cited by 9 publications (4 citation statements)
References 4 publications
“…Network Structure. In this study, the SSD model with VGG16 [17,19,20] as the main network was selected.…”
Section: Lead Network
confidence: 99%
“…To solve this problem, we propose a composite backbone SSD (CBSSD) object detection method. Based on the CBNet network, we introduce the ResNet50 [16][17][18]…”
Section: Introduction
confidence: 99%
“…In particular, the ResNet network is the most representative, which deepens the training depth of the network, and alleviates the problems of gradient disappearance or explosion, network degradation, and so on. Many studies have used ResNet to improve the semantic segmentation model and have achieved relatively successful results [27][28][29]. Therefore, this paper chooses to use the ResNet-50 network to fuse FCN, U-net, and DeeplabV3+ models to improve the recognition accuracy of cyanobacteria blooms.…”
Section: Introduction
confidence: 99%
“…For instance, there are a million parameters defining a deep learning model, which requires large amounts of data to learn from it and is a computationally intensive process. Especially, when the data size and the deep learning models become larger and more complicated, training a model within a considerable period usually demands more hardware memory and computing power such as parallel and distributed computing [2] [3] [4] including data parallelism [5], model parallelism [6], pipeline parallelism [7] and hybrid parallelism [8]. Recently, various distributed deep learning frameworks such as Caffe-MPI [9], TensorFlow [10], MXNet [11], Chainer [12], CNTK [13] have been proposed, which provide basic building blocks for designing effective neural network models for targeted applications.…”
Section: Introduction
confidence: 99%
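The citation statement above distinguishes data, model, pipeline, and hybrid parallelism. As a rough illustration of the pipeline-parallel idea referenced there (and in the cited paper's title), the sketch below splits a small VGG-style network into two stages and feeds micro-batches through them in sequence. This is a minimal sketch assuming PyTorch, not the authors' implementation; the names `stage1`, `stage2`, and `pipelined_forward` are illustrative, and on a single process it only shows the stage/micro-batch structure, not the inter-device communication or overlap a real pipeline-parallel runtime provides.

```python
# Minimal, illustrative sketch of pipeline parallelism (not the paper's code).
# A VGG-like model is split into two stages; a mini-batch is processed as
# micro-batches, so that with the stages on separate devices, stage 1 of
# micro-batch i+1 could overlap with stage 2 of micro-batch i.
import torch
import torch.nn as nn

# Stage 1: convolutional feature extractor (conceptually on worker/device 0).
stage1 = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((7, 7)),
)

# Stage 2: classifier head (conceptually on worker/device 1).
stage2 = nn.Sequential(
    nn.Flatten(),
    nn.Linear(128 * 7 * 7, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def pipelined_forward(x, num_microbatches=4):
    """Split the mini-batch into micro-batches and push each one through
    the two stages in order, mimicking the scheduling unit of a pipeline."""
    outputs = []
    for micro in x.chunk(num_microbatches):
        h = stage1(micro)          # would run on the first pipeline stage
        outputs.append(stage2(h))  # would run on the second pipeline stage
    return torch.cat(outputs)

if __name__ == "__main__":
    batch = torch.randn(8, 3, 32, 32)   # toy input; VGG16 normally uses 224x224
    logits = pipelined_forward(batch)
    print(logits.shape)                  # torch.Size([8, 10])
```

In an actual pipeline-parallel setup the two stages live on different GPUs or hosts and micro-batches are streamed so both stages stay busy; the micro-batch loop above is only meant to make that scheduling unit concrete.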