2022
DOI: 10.4218/etrij.2022-0103

A novel MobileNet with selective depth multiplier to compromise complexity and accuracy

Abstract: In the last few years, convolutional neural networks (CNNs) have demonstrated good performance while solving various computer vision problems. However, since CNNs exhibit high computational complexity, signal processing is performed on the server side. To reduce the computational complexity of CNNs for edge computing, lightweight algorithms, such as MobileNet, have been proposed. Although MobileNet is lighter than other CNN models, it commonly achieves lower classification accuracy. Hence, to find a balance betwee…
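The complexity–accuracy trade-off the abstract refers to is controlled in MobileNet by the depth (width) multiplier α, which scales the channel count of every layer. A minimal sketch of how α affects the parameter count of a single depthwise-separable layer; the layer shapes below are illustrative, not taken from the paper:

```python
def separable_params(c_in, c_out, k=3, alpha=1.0):
    """Parameter count of one depthwise-separable layer (depthwise k x k
    convolution followed by 1x1 pointwise convolution), with the channel
    counts scaled by the depth multiplier alpha."""
    c_in = int(c_in * alpha)
    c_out = int(c_out * alpha)
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 filters mixing channels
    return depthwise + pointwise

# Shrinking alpha reduces parameters roughly quadratically,
# since both input and output channel counts are scaled.
for alpha in (1.0, 0.75, 0.5):
    print(alpha, separable_params(128, 256, alpha=alpha))
```

Because both channel dimensions shrink with α, the pointwise term (which dominates) scales approximately as α², which is why a small reduction in the multiplier yields a large reduction in model size.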

Cited by 7 publications (2 citation statements)
References 35 publications
“…Depthwise separable convolution uses two layers, called depthwise convolution (DW) and pointwise convolution (PW), to reduce computation. A k×k kernel is applied in the depthwise convolution layer, and the pointwise convolution layer then uses m kernels of size c×1×1 to generate new feature maps [34]. The mathematical formulation of separable 2D convolution is defined as follows [35]:…”
Section: N X M N H M N X I J H M I N J
confidence: 99%
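The saving this quote describes can be made concrete by counting multiplications. A hedged sketch (the function names and tensor shapes are my own, not from the cited works): for an h×w feature map with c input channels, m output channels, and a k×k kernel, separable convolution costs a factor of 1/m + 1/k² of a regular convolution:

```python
def standard_conv_mults(h, w, c, m, k):
    # Regular convolution: m filters of size k x k x c, one inner
    # product per output position on an h x w feature map.
    return h * w * m * k * k * c

def separable_conv_mults(h, w, c, m, k):
    depthwise = h * w * c * k * k  # one k x k kernel per input channel (DW)
    pointwise = h * w * c * m      # m kernels of size c x 1 x 1 (PW)
    return depthwise + pointwise
```

For example, with a 32×32 map, c = 64, m = 128, and k = 3, the separable form needs roughly 12% of the multiplications of the regular convolution, matching the 1/m + 1/k² ratio exactly.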
“…As an extension of CQTNet, we propose a CNN model, termed CQTXNet, built from depthwise separable convolution layers with residual connections following the Xception [7] and ResNet [8] architectures. In recently proposed network architectures, depthwise separable convolution has become fundamental, replacing the more computationally burdensome regular convolution with a negligible performance difference [9], [10]. Residual connections [8] let network parameter gradients propagate easily from the output back to earlier layers, which allows increasing the depth of a CNN without convergence problems during training or performance degradation during testing.…”
Section: Introduction
confidence: 99%
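The two ingredients this citing work combines — a depthwise-separable convolution and an identity shortcut — can be sketched in a few lines of NumPy. This is an illustrative toy implementation under my own naming and shape conventions, not code from CQTXNet; the identity shortcut assumes the output channel count M equals the input channel count C:

```python
import numpy as np

def depthwise_separable(x, dw_kernels, pw_kernels):
    """x: (C, H, W); dw_kernels: (C, k, k); pw_kernels: (M, C).
    Depthwise convolution (same padding) followed by 1x1 pointwise mixing."""
    c, h, w = x.shape
    k = dw_kernels.shape[1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    dw = np.zeros_like(x)
    for ch in range(c):                # one k x k kernel per channel (DW)
        for i in range(h):
            for j in range(w):
                dw[ch, i, j] = np.sum(xp[ch, i:i + k, j:j + k] * dw_kernels[ch])
    # Pointwise: M kernels of shape (C, 1, 1) mix the channels (PW).
    return np.tensordot(pw_kernels, dw, axes=([1], [0]))

def residual_block(x, dw_kernels, pw_kernels):
    # Identity shortcut (ResNet-style): gradients flow through the "+ x" term
    # unattenuated, which is what eases training of deeper stacks.
    return depthwise_separable(x, dw_kernels, pw_kernels) + x
```

With an identity depthwise kernel (1 at the center) and an identity pointwise matrix, the block reduces to x + x, which makes the shortcut's contribution easy to verify in isolation.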