2022
DOI: 10.1109/mdat.2021.3095215
EDLAB: A Benchmark for Edge Deep Learning Accelerators

Abstract: A growing trend is to deploy deep learning algorithms in edge environments to mitigate the privacy and latency issues of cloud computing. Diverse edge deep learning accelerators have been devised to speed up the inference of deep learning algorithms on edge devices. These accelerators differ in their power and performance characteristics, which makes it very challenging to compare them efficiently and uniformly. In this paper, we introduce EDLAB, an end-to-end …


Cited by 10 publications (4 citation statements)
References 11 publications
“…The importance factors in the white dashed box correspond to the weight that needs to be "removed". Subsequently, an automatic script [12] is employed to measure the real latency of the compressed model on the target edge device. It is worth noting that the latency test and model training are conducted separately on the cloud and edge device, respectively, and are executed in parallel without any mutual interference.…”
Section: Self-Evolving Latency Predictor
confidence: 99%
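The statement above describes measuring the real latency of a compressed model directly on the target edge device via an automatic script. As a minimal sketch of that idea (the `model` callable and the warm-up/run counts are hypothetical stand-ins, not EDLAB's actual script), one might time repeated inferences and report the median:

```python
import statistics
import time

def measure_latency(model, inputs, warmup=3, runs=10):
    """Return the median inference latency (ms) of `model` on `inputs`.

    `model` is any callable; here it stands in for the compressed
    network deployed on the target edge device.
    """
    for _ in range(warmup):
        model(inputs)          # warm-up runs stabilize caches and clocks
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        model(inputs)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Trivial stand-in "model" for illustration:
latency_ms = measure_latency(lambda x: sum(v * v for v in x), list(range(1000)))
```

Using the median rather than the mean makes the measurement robust to occasional scheduling spikes, which matters on shared edge hardware.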
“…Reference [20] investigates the on-the-edge inference of DNNs in terms of latency, energy consumption, and temperature, on five different hardware platforms; unlike the proposed method, this work does not take advantage of the optimization frameworks we have investigated. In [21], an in-depth benchmark analysis of three embedded platforms is performed for image vision applications including MobileNet and InceptionV2; in [22], EDLAB is delivered, an end-to-end benchmark to evaluate the overall performance of three image classification and one object detection models across Intel NCS2, Edge TPU and Jetson Xavier NX. In [23], a performance analysis of the edge TPU board is provided for object classification.…”
Section: Related Work
confidence: 99%
“…However, modern CNNs are usually equipped with billions of operations. For example, the most popular CNN model, ResNet50 [1], has 4.1 billion Multiply-Accumulate operations (MACs), which are computationally prohibitive for embedded hardware [4].…”
Section: Introduction
confidence: 99%
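The MAC counts cited above (e.g., 4.1 billion for ResNet50) come from summing per-layer multiply-accumulate operations. As a sketch of the standard per-layer formula (the example layer shape is illustrative, not taken from the paper): each output element of a convolution needs k·k·c_in MACs, so a layer's total is output area times output channels times that cost:

```python
def conv2d_macs(h_out, w_out, c_in, c_out, k):
    """MACs for one standard 2-D convolution layer: every output
    element requires k*k*c_in multiply-accumulate operations."""
    return h_out * w_out * c_out * (k * k * c_in)

# e.g., a 3x3 conv with 64 -> 64 channels on a 56x56 feature map:
macs = conv2d_macs(56, 56, 64, 64, 3)  # 115,605,504 MACs
```

Summing this quantity over all layers of a network yields the model-level totals that make such workloads prohibitive for embedded hardware.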