2020
DOI: 10.1109/access.2020.2997078
Global Learnable Pooling With Enhancing Distinctive Feature for Image Classification

Abstract: Pooling layers appear widely in deep networks because they aggregate information over a local region and enable fast downsampling. Because the layers closer to the output learn high-level semantic information related to classification, global average pooling inhibits the contribution of locally high-magnitude features within the global region. Moreover, the gradient of the distinctive features is considerably attenuated by the large region size of global average pooling. …
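The abstract's attenuation claim can be made concrete with a small numeric sketch (not from the paper; the array shapes and values are illustrative assumptions): under global average pooling, every spatial location receives the same gradient of 1/(H·W), so a single distinctive activation is both diluted in the forward pass and weakly updated in the backward pass.

```python
import numpy as np

def global_avg_pool(x):
    """Global average pooling over the spatial dims of a (C, H, W) feature map."""
    return x.mean(axis=(1, 2))

# Toy (1, 4, 4) feature map with one distinctive high-magnitude activation.
x = np.zeros((1, 4, 4))
x[0, 2, 2] = 8.0

pooled = global_avg_pool(x)        # the lone 8.0 is diluted to 8 / 16 = 0.5

# d(pooled)/d(x[i, j]) = 1 / (H * W) at every location, so the gradient
# reaching the distinctive feature shrinks as the pooled region grows.
grad_per_location = 1.0 / (4 * 4)  # 0.0625 for a 4x4 map
```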

Cited by 17 publications (6 citation statements)
References 31 publications
“…SVP outperforms conventional pooling methods such as GAP and GMP, and it is also more robust against noise. In addition, SVP is experimentally shown to provide further performance gains when applied to an existing knowledge distillation scheme such as [17]. On the other hand, SVP has the disadvantage that its computational complexity is somewhat higher than that of general pooling methods.…”
Section: Discussion
confidence: 99%
“…Since global pooling was first introduced in [16], it has evolved in various ways, such as employing trainable variables [17] and adopting non-linear functions [18], [19]. On the other hand, a global pooling method using SVD was proposed [20], which was based on covariance pooling.…”
Section: Related Work
confidence: 99%
“…Convolutional Block Attention Module (CBAM) [131] contains two attention mechanisms: a channel attention module followed by a spatial attention module, where the channel attention module aims to capture the inter-channel relationship of features, while the spatial attention module aims to capture the inter-spatial relationship of features. Global Learnable Pooling (GLPool) [147] can also be considered an attention mechanism, where the weight of each pooling location is treated as a parameter and learned together with the other network parameters in an end-to-end manner. [36] mathematically shows that attention-weighted pooling is equivalent to a low-rank approximation of second-order pooling.…”
Section: Pooling Methods
confidence: 99%
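The per-location weighting described in the quote above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the normalization of the weights, and the array shapes are all assumptions; in a real network `w` would be a trained parameter updated by backpropagation.

```python
import numpy as np

def learnable_pool(x, w):
    """Weighted global pooling over a (C, H, W) feature map.

    w holds one weight per spatial location, shape (H, W); it is
    normalized here so the weights sum to 1, making uniform weights
    reduce to plain global average pooling.
    """
    w = w / w.sum()
    # Contract the spatial dims of x against w, leaving a (C,) vector.
    return np.tensordot(x, w, axes=([1, 2], [0, 1]))

C, H, W = 3, 4, 4
x = np.arange(C * H * W, dtype=float).reshape(C, H, W)

# Uniform weights reproduce GAP exactly...
gap_like = learnable_pool(x, np.ones((H, W)))

# ...while a learned, non-uniform w can emphasize distinctive locations.
w_peaked = np.ones((H, W))
w_peaked[2, 2] = 10.0
emphasized = learnable_pool(x, w_peaked)
```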
“…Global average pooling (GAP) computes the global average of all pixels in the feature map of each channel and yields one output per feature map [39–41]. GAP directly removes the black-box features of the fully connected layer and gives each channel practical significance; the vectors composed of these output features are then sent directly to the classifier [42]. Figure 6 shows the comparison between the fully connected layer and the global average pooling layer.…”
Section: Global Average
confidence: 99%
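The FC-versus-GAP comparison in that last quote can be sketched numerically. This is an illustrative parameter count under assumed shapes (a 512×7×7 feature map and 10 classes, typical but not taken from the cited papers): GAP itself has no parameters, so the classifier needs only C·num_classes weights instead of C·H·W·num_classes for a fully connected layer on the flattened map.

```python
import numpy as np

C, H, W, num_classes = 512, 7, 7, 10
x = np.random.rand(C, H, W)

# GAP: one value per channel, no learnable parameters of its own.
gap_out = x.mean(axis=(1, 2))            # shape (512,)

# The pooled vector goes straight to the classifier:
clf_w = np.random.rand(num_classes, C)   # C * num_classes weights
logits = clf_w @ gap_out                 # shape (10,)

# A fully connected layer on the flattened map would instead need
# C * H * W * num_classes weights -- H * W (= 49) times more here.
fc_params = C * H * W * num_classes
gap_params = C * num_classes
```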