2023
DOI: 10.1109/tnse.2022.3154412
ABM-SpConv-SIMD: Accelerating Convolutional Neural Network Inference for Industrial IoT Applications on Edge Devices

Abstract: Convolutional Neural Networks (CNNs) have been widely deployed, but traditional cloud-datacenter-based applications suffer from network bandwidth and latency demands when applied to Industrial-Internet-of-Things (IIoT) fields. It is critical to migrate CNN inference to edge devices for efficiency and security reasons. However, it is challenging to deploy complex CNNs on resource-constrained IIoT edge devices due to the large number of parameters and intensive floating-point computations. In this pap…


Cited by 10 publications (6 citation statements) · References 52 publications
“…However, it is difficult to find the same elements in a small kernel, while small kernels are the trend. The authors of [14,19] have confirmed this issue. The authors expanded the scope of sharing the same weights into several kernels to implement ABM-SpConv, which increases the complexity of the hardware architecture.…”
Section: Related Work
confidence: 71%
“…Another type of method focuses on reducing the use of multiplier units in the filter_loop through the optimized design of the hardware, such as ABM-SpConv [9,14,19], MF-Conv [13], and the weight-sharing (WS) technique [20][21][22]. The key to ABM-SpConv [9] is to perform the multiplication operations and accumulation operations of convolution in two separate stages.…”
Section: Related Work
confidence: 99%
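The citation above describes the core ABM-SpConv idea: because quantized weights take only a few distinct values, activations sharing the same weight can be summed first (accumulation stage) and each distinct weight multiplied only once (multiplication stage). A minimal sketch of that accumulate-before-multiply dot product, under the assumption of integer-quantized weights; the function name `abm_dot` is hypothetical, not from the paper:

```python
def abm_dot(weights, activations):
    """Accumulate-before-multiply dot product (ABM-SpConv-style sketch).

    Stage 1 (accumulation): sum the activations paired with each
    distinct nonzero weight value, skipping zeros (sparsity).
    Stage 2 (multiplication): one multiply per distinct weight value.
    """
    sums = {}
    for w, a in zip(weights, activations):
        if w != 0:  # pruned (zero) weights contribute nothing
            sums[w] = sums.get(w, 0.0) + a
    return sum(w * s for w, s in sums.items())


# With weights [2, 0, 2, 3], the two activations paired with weight 2
# are summed first, so only two multiplications are performed
# instead of three.
result = abm_dot([2, 0, 2, 3], [1.0, 2.0, 3.0, 4.0])  # 2*(1+3) + 3*4 = 20.0
```

This trades multiplications for cheap additions and a small lookup, which is the hardware-cost reduction the cited work targets; the per-kernel scope of the value-grouping is what [14,19] extend across several kernels.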
“…The utilization of Raspberry Pi extended beyond security applications. For example, ABM-SpConv-SIMD, an on-device inference optimization framework, was proposed by [25]. Implemented on Raspberry Pi devices, this framework aimed to accelerate network inference by fully utilizing the low-cost and common CPU resources.…”
Section: Raspberry Pi
confidence: 99%
“…Researchers also showed an improved latency by optimizing graphical unit communication during computing for image classification tasks [50]. Scientists proposed a new framework using an acceleration approach to improve inference latency, known as ABM-SpConv-SIMD [51].…”
Section: Transformers in Computer Vision
confidence: 99%