Efficient and Systematic Partitioning of Large and Deep Neural Networks for Parallelization
2021 · DOI: 10.1007/978-3-030-85665-6_13

Cited by 8 publications (3 citation statements)
References 6 publications
“…(iii) The operators are not executed separately. They are all connected in the computational graph via the edges [19]. The output tensor of an operator is also the input tensor of its successive operator.…”
Section: Distributed Strategies Searching (mentioning)
Confidence: 99%
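The statement above describes the dataflow view used when searching for distributed strategies: operators are nodes of a computational graph, edges carry tensors, and each operator's output tensor is consumed as the input of its successor. A minimal sketch of such a chained graph in Python; all class and function names here are illustrative, not taken from the cited papers:

```python
# Sketch of a computational graph whose operators are linked by
# tensor-carrying edges: the output of one operator is the input
# of its successor. Names are illustrative only.

class Operator:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn           # computation applied to the input tensor
        self.successor = None  # edge to the next operator in the graph

    def then(self, op):
        """Connect this operator's output edge to `op`'s input."""
        self.successor = op
        return op

def run_chain(first_op, tensor):
    """Execute operators in graph order; each output feeds the next input."""
    op = first_op
    while op is not None:
        tensor = op.fn(tensor)  # output tensor becomes the successor's input
        op = op.successor
    return tensor

# Example: a three-operator chain over a toy "tensor" (a list of floats).
matmul = Operator("matmul", lambda t: [2.0 * x for x in t])
relu   = Operator("relu",   lambda t: [max(0.0, x) for x in t])
scale  = Operator("scale",  lambda t: [0.5 * x for x in t])
matmul.then(relu).then(scale)

print(run_chain(matmul, [1.0, -3.0, 4.0]))  # -> [1.0, 0.0, 4.0]
```

Because every edge is shared between exactly one producer and its consumer, a partitioning decision on one operator's output tensor directly constrains the input layout of the next, which is why the cited work treats the operators as inseparable.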
“…[Fig. 5: A zipper structure representing the highlighted path of the binary tree] …methods are consequently suitable for a coarser-grained graph representation, whereas a finer-grained representation provides more possibilities. This work does not provide implementation details because a more practical point of view was already given in previous work [19].…”
Section: From Tree To Graph (mentioning)
Confidence: 99%
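The second statement refers to a zipper structure over a binary tree: a focused node together with the context needed to rebuild the tree, so the highlighted root-to-focus path can be walked and edited cheaply. Since the cited work deliberately omits implementation details, the following is only a minimal sketch under assumed names and layout:

```python
# Sketch of a binary-tree zipper: a focus node plus a trail of "crumbs"
# recording the path taken from the root, so the highlighted path can
# be traversed in both directions. All names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

@dataclass
class Crumb:
    direction: str           # "L" or "R": which child we descended into
    parent_value: int
    sibling: Optional[Node]  # the subtree we did not enter

@dataclass
class Zipper:
    focus: Node
    crumbs: list             # root-to-focus context, innermost last

    def down(self, direction):
        """Move the focus to a child, pushing the context onto the trail."""
        if direction == "L":
            return Zipper(self.focus.left,
                          self.crumbs + [Crumb("L", self.focus.value, self.focus.right)])
        return Zipper(self.focus.right,
                      self.crumbs + [Crumb("R", self.focus.value, self.focus.left)])

    def up(self):
        """Move the focus back to the parent, rebuilding it from the crumb."""
        crumb = self.crumbs[-1]
        if crumb.direction == "L":
            parent = Node(crumb.parent_value, self.focus, crumb.sibling)
        else:
            parent = Node(crumb.parent_value, crumb.sibling, self.focus)
        return Zipper(parent, self.crumbs[:-1])

    def path(self):
        """The highlighted root-to-focus path as a list of directions."""
        return [c.direction for c in self.crumbs]

# Example: focus on the right child of the left child of the root.
tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
z = Zipper(tree, []).down("L").down("R")
print(z.focus.value, z.path())  # -> 5 ['L', 'R']
print(z.up().up().focus.value)  # -> 1 (root rebuilt from the crumbs)
```

The crumb trail is what makes the zipper fit a tree-shaped search space: only the path under consideration is materialized, while the untouched siblings are kept as whole subtrees, which matches the excerpt's point that tree-based methods suit coarser-grained graph representations.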