2022
DOI: 10.1109/access.2022.3175818
MobileNetV3 With CBAM for Bamboo Stick Counting

Cited by 13 publications (5 citation statements)
References 26 publications
“…Depth-wise separable convolutions employ M × K × K filters that can be represented by a K × K template and M parameters that distribute the template in the depth dimension; this observation has motivated the creation of blueprint separable convolutions. Every filter kernel F⁽ⁿ⁾ can be expressed through a blueprint B⁽ⁿ⁾ and the weights w_{n,1}, …, w_{n,M} via

F⁽ⁿ⁾_{m,:,:} = w_{n,m} · B⁽ⁿ⁾   (5)

with m ∈ {1, …, M} (number of kernels in one filter) and n ∈ {1, …, N} (number of filters in one layer). Figure 13 illustrates blueprint separable convolutions and their differences from standard convolutions.…”
Section: Blueprint Separable Convolutions To Replace Depth-wise Separ… (mentioning)
confidence: 99%
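Equation (5) says that each of a filter's M depth slices is just a scaled copy of a single K × K blueprint, so a layer needs only N·(K² + M) parameters instead of N·M·K². A minimal NumPy sketch of this construction (the function name `blueprint_filters` and the toy sizes are illustrative, not from the paper):

```python
import numpy as np

def blueprint_filters(blueprints, weights):
    """Build full convolution filters from blueprints and per-depth weights.

    blueprints: (N, K, K) -- one K x K template per filter
    weights:    (N, M)    -- M scalars distributing each template over depth
    returns:    (N, M, K, K) with F[n, m] = weights[n, m] * blueprints[n],
    i.e. equation (5) realized by broadcasting.
    """
    return weights[:, :, None, None] * blueprints[:, None, :, :]

# Toy example: N = 4 filters, depth M = 3, kernel size K = 3
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 3, 3))
W = rng.standard_normal((4, 3))
F = blueprint_filters(B, W)

# Each depth slice is a scaled copy of its blueprint
assert np.allclose(F[2, 1], W[2, 1] * B[2])

# Parameter count: N*(K*K + M) for BSConv vs N*M*K*K for unconstrained filters
bsconv_params = B.size + W.size   # 4*9 + 4*3 = 48
standard_params = F.size          # 4*3*9  = 108
```

For these toy sizes the constrained filters need 48 parameters where unconstrained ones would need 108; the gap widens quickly with depth M.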
“…Our research addresses these limitations by introducing key modifications to the MobileNetV3 architecture tailored for parking lot occupancy detection. This modified version incorporates several architectural improvements, including the use of a Leaky-ReLU6 [3] activation function for the shallow part of the MobileNetV3 model, the replacement of the squeeze-and-excitation module [4] with the convolution block attention module [5], and the replacement of the depth-wise separable convolutions with blueprint separable convolutions [6]. We treat the automatic detection of vacant spaces as a binary classification problem and train and test the improved model on widely used parking management datasets such as CNRPark-EXT [7] and PKLOT [8].…”
Section: Introduction (mentioning)
confidence: 99%
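The Leaky-ReLU6 activation mentioned in this statement can be sketched as a leaky slope on the negative side combined with ReLU6's cap at 6. The negative slope `alpha` below is an assumption for illustration; the value used in the cited work is not given here:

```python
import numpy as np

def leaky_relu6(x, alpha=0.01):
    """Leaky variant of ReLU6: a small slope alpha for negative inputs
    instead of a hard zero, with the positive side capped at 6."""
    return np.minimum(np.where(x > 0, x, alpha * x), 6.0)

x = np.array([-10.0, -1.0, 0.0, 3.0, 10.0])
y = leaky_relu6(x)   # [-0.1, -0.01, 0.0, 3.0, 6.0]
```

Unlike plain ReLU6, negative inputs keep a small gradient, which avoids dead units in the shallow layers where the modification is applied.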
“…The combined attention mechanism, CBAM (Convolutional Block Attention Module), emerges by integrating these two mechanisms [11]. The CBAM attention mechanism achieves better results than SENet (Squeeze-and-Excitation Networks), which focuses only on channel attention [12]. The structure of the CBAM attention mechanism is shown in Figure 3.…”
Section: Introducing the CBAM Module (mentioning)
confidence: 99%
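The sequential channel-then-spatial structure described above can be sketched in NumPy. This is a simplified stand-in, not CBAM's exact formulation: the real module uses a shared two-layer MLP with a reduction ratio for channel attention and a 7×7 convolution over the stacked mean/max maps for spatial attention; here a single matrix and an elementwise sum take their places.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w):
    """Channel attention: squeeze H x W with both average- and max-pooling,
    pass through a shared transform (one matrix here instead of CBAM's
    two-layer MLP), and rescale each channel.  x: (C, H, W), w: (C, C)."""
    avg = x.mean(axis=(1, 2))           # (C,)
    mx = x.max(axis=(1, 2))             # (C,)
    scale = sigmoid(w @ avg + w @ mx)   # (C,)
    return x * scale[:, None, None]

def spatial_attention(x):
    """Spatial attention: pool over channels, merge the mean and max maps
    (a sum stands in for CBAM's 7x7 convolution), and rescale each pixel."""
    avg = x.mean(axis=0)                # (H, W)
    mx = x.max(axis=0)                  # (H, W)
    scale = sigmoid(avg + mx)           # (H, W)
    return x * scale[None, :, :]

def cbam(x, w):
    # CBAM applies channel attention first, then spatial attention
    return spatial_attention(channel_attention(x, w))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))      # C=8 channels, 5x5 feature map
w = rng.standard_normal((8, 8)) * 0.1
y = cbam(x, w)                          # same shape as x, rescaled features
```

Because both attention maps pass through a sigmoid, the output is the input modulated by factors in (0, 1) along channels and then along spatial positions, matching the refine-not-replace role CBAM plays inside a backbone like MobileNetV3.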
“…In multi-vessel/ship detection and classification, the ability to represent the point of interest more accurately is important, and one way to do so is by utilizing attention. A convolutional block attention module (CBAM) [27] is one of the attention networks that has been widely used to improve detection capabilities in many applications, such as fly species recognition [28], bamboo stick counting [29], safety helmet wearing recognition [30], and human activity recognition [31]. Therefore, by leveraging attention mechanisms that focus on essential features and suppress irrelevant ones, we hope to boost the power of representation.…”
Section: Introduction (mentioning)
confidence: 99%