Proceedings of the 4th International Workshop on Embedded and Mobile Deep Learning 2020
DOI: 10.1145/3410338.3412338

Split Computing for Complex Object Detectors

Abstract: Following the trends of mobile and edge computing for DNN models, an intermediate option, split computing, has been attracting attention from the research community. Previous studies have shown empirically that while mobile and edge computing are often the best options in terms of total inference time, there are scenarios where split computing methods achieve shorter inference time. All the proposed split computing approaches, however, focus on image classification tasks, and most are assessed with …
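To make the trade-off concrete: split computing partitions a DNN at an intermediate layer, runs the early layers (the head) on the mobile device, transmits the intermediate activation tensor, and runs the remaining layers (the tail) on the edge server. A minimal PyTorch sketch, assuming a ResNet-50 as a stand-in backbone and an arbitrary split point after layer2 (both illustrative choices, not the paper's configuration):

```python
# Minimal split-computing sketch: head runs on the mobile device,
# tail on the edge server; only the intermediate tensor is transmitted.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()

# Head: layers executed on the mobile device up to the (illustrative) split point.
head = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2,
)
# Tail: layers executed on the edge server from the split point onward.
tail = torch.nn.Sequential(
    model.layer3, model.layer4, model.avgpool,
    torch.nn.Flatten(1), model.fc,
)

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)  # input captured on the mobile device
    z = head(x)                      # intermediate tensor to transmit
    y = tail(z)                      # inference completed on the edge server

# The transmitted payload size is what competes with raw-image upload
# in the comparison against pure edge offloading.
print(z.shape, z.numel() * z.element_size(), "bytes")
```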

Cited by 22 publications (23 citation statements, 2020-2024) | References 22 publications

“…However, splitting at late layers would position most of the computational complexity at the weaker mobile device. This issue was recently discussed in [3], [10] for image classification and object detection models, reinforcing the results obtained in [7] on traditional network splitting.…”
Section: A. Background (supporting)
confidence: 69%
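The concern quoted above can be made quantitative: the later the split point, the larger the share of the model's computation that stays on the weaker mobile device. A rough sketch, using parameter count as a crude proxy for per-block compute on a ResNet-50 (an illustrative model choice, not one taken from the cited works):

```python
# Crude illustration of how late split points load the mobile device:
# cumulative parameter share up to each candidate split point.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)
names = ["conv1", "bn1", "layer1", "layer2", "layer3", "layer4", "fc"]
blocks = [model.conv1, model.bn1, model.layer1, model.layer2,
          model.layer3, model.layer4, model.fc]
sizes = [sum(p.numel() for p in b.parameters()) for b in blocks]
total = sum(sizes)

running = 0
for name, s in zip(names, sizes):
    running += s
    print(f"split after {name}: {running / total:.1%} of parameters on mobile")
```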
“…This paper builds on this approach [3], [10] to obtain in-network compression with further improved detection performance in object detection tasks. Specifically, we generalize the head network distillation technique, and apply it to the state-of-the-art detection models described in the previous section (Faster R-CNN, Mask R-CNN, and Keypoint R-CNN).…”
Section: A. Background (mentioning)
confidence: 99%
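For context, head network distillation trains a lightweight replacement head (the student) to reproduce the intermediate output of the original head (the teacher), so the unchanged tail of the network can be reused. A minimal sketch, assuming an MSE distillation loss and a hypothetical bottlenecked student architecture; neither is the paper's exact design:

```python
# Head network distillation sketch: the student head learns to match the
# frozen teacher head's intermediate activations on unlabeled inputs.
import torch
import torch.nn as nn

teacher_head = nn.Sequential(            # frozen head of the original model
    nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
    nn.Conv2d(64, 256, 3, stride=2, padding=1),
).eval()
for p in teacher_head.parameters():
    p.requires_grad_(False)

student_head = nn.Sequential(            # lightweight head with a bottleneck
    nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
    nn.Conv2d(32, 8, 3, padding=1), nn.ReLU(),   # narrow, cheap-to-transmit tensor
    nn.Conv2d(8, 256, 3, stride=2, padding=1),   # restores the teacher's shape
)

opt = torch.optim.Adam(student_head.parameters(), lr=1e-3)
for _ in range(10):                      # toy loop; real training iterates over a dataset
    x = torch.randn(4, 3, 224, 224)
    loss = nn.functional.mse_loss(student_head(x), teacher_head(x))
    opt.zero_grad()
    loss.backward()
    opt.step()
```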
“…• We apply BottleFit on cutting-edge CNNs such as DenseNet-169, DenseNet-201 and ResNet-152 on the ImageNet dataset, and compare the accuracy obtained by BottleFit with state-of-the-art local computing [6] and split computing approaches [16]- [19], [21], [24]. Our training campaign concludes that BottleFit achieves up to 77.1% data compression (with respect to JPEG) with only up to 0.6% loss in accuracy, while existing mobile and split computing approaches incur considerable accuracy loss of up to 6% and 3.6%, respectively.…”
Section: Introduction (mentioning)
confidence: 99%
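The "data compression with respect to JPEG" figure quoted above compares the bytes needed to transmit the bottleneck tensor against the bytes of a JPEG-encoded input image. A rough, self-contained illustration with synthetic numbers; the bottleneck shape below is hypothetical, not BottleFit's actual configuration:

```python
# Illustrating the compression-vs-JPEG metric: size of an 8-bit quantized
# bottleneck tensor compared with a JPEG encoding of the same input.
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (224, 224, 3), dtype=np.uint8)  # synthetic input

buf = io.BytesIO()
Image.fromarray(image).save(buf, format="JPEG", quality=90)
jpeg_bytes = buf.getbuffer().nbytes

# Hypothetical bottleneck: 8 channels at 28x28, quantized to one byte each.
bottleneck_bytes = 8 * 28 * 28 * 1

print(f"JPEG: {jpeg_bytes} B, bottleneck: {bottleneck_bytes} B, "
      f"compression vs JPEG: {1 - bottleneck_bytes / jpeg_bytes:.1%}")
```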