Refining activation downsampling with SoftPool
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01019

Cited by 187 publications (75 citation statements)
References 34 publications
“…As a result, the model may fail to represent every attribute of an entity accurately. In practice, we adopt an exponentially weighted pooling method similar to softpool (Stergiou et al, 2021):…”
Section: Proposed Methods
confidence: 99%
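The exponentially weighted pooling this excerpt adopts is the core SoftPool operation: each activation in a kernel window contributes in proportion to its softmax weight, so strong activations dominate the downsampled output instead of being averaged away. Below is a minimal PyTorch sketch of the 2D case; the function name and the avg_pool-based formulation are our shorthand, not the authors' reference code (available at github.com/alexandrosstergiou/SoftPool).

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2) -> torch.Tensor:
    """Softmax-weighted (SoftPool-style) downsampling of a (N, C, H, W) map.

    Within each window, output = sum_i softmax(a)_i * a_i. Dividing the
    average of exp(a) * a by the average of exp(a) cancels the 1/n factors,
    leaving exactly that weighted sum. Note this sketch applies no numerical
    stabilization, unlike a production implementation.
    """
    e_x = torch.exp(x)
    return F.avg_pool2d(e_x * x, kernel_size, stride) / F.avg_pool2d(e_x, kernel_size, stride)
```

For a (1, 3, 8, 8) input with the default 2x2 kernel and stride, this returns a (1, 3, 4, 4) map in which each output value is the softmax-weighted sum of its window.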
“…We also treat the known types of the central entity as its neighbors to use them to infer the missing types. To aggregate the inference results generated by the N2T and Agg2T mechanisms, we adopt a carefully designed pooling method similar to softpool (Stergiou et al, 2021). Experiments show that this pooling method can produce stable and interpretable inference results.…”
Section: Introduction
confidence: 99%
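To illustrate how such a softmax-weighted pooling can fuse several inference results into one stable estimate, here is a hypothetical sketch: the (num_sources, num_types) score layout and the softpool_aggregate name are our assumptions, not the citing paper's actual N2T/Agg2T implementation.

```python
import torch

def softpool_aggregate(scores: torch.Tensor) -> torch.Tensor:
    """Fuse per-source type scores of shape (num_sources, num_types).

    Each source (e.g. one neighbor-based inference result) is weighted by
    its softmax share for each type, so confident sources dominate while
    weak ones still contribute, which keeps the result smooth and
    interpretable compared with a hard max.
    """
    weights = torch.softmax(scores, dim=0)   # per-type weights over sources
    return (weights * scores).sum(dim=0)     # exponentially weighted sum
```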
“…For some datasets, class imbalance can bias the classification results of a deep learning model toward the categories with more data. Among the methods for handling this problem, such as oversampling and undersampling, we use SMOTE [40], an oversampling algorithm that constructs artificial minority-class samples rather than duplicating existing ones, so every generated sample is new to the original dataset. The basic idea is to find, for each minority-class sample, its K nearest neighbors by Euclidean distance, randomly select one of them, and generate a new sample at a random point on the line segment between the two in feature space.…”
Section: Swin Transformer Block
confidence: 99%
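The SMOTE procedure the excerpt describes (k nearest minority-class neighbors by Euclidean distance, then interpolation along the connecting segment) can be sketched in a few lines of NumPy. The function name and signature below are illustrative only; for real use, `imblearn.over_sampling.SMOTE` provides a tested implementation.

```python
import numpy as np

def smote_sample(X_min: np.ndarray, n_new: int, k: int = 5, rng=None) -> np.ndarray:
    """Generate n_new synthetic minority-class samples, SMOTE-style.

    X_min holds the minority-class samples, one per row. Each synthetic
    sample lies on the segment between a randomly chosen minority sample
    and one of its k nearest minority-class neighbors.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Pairwise Euclidean distances within the minority class (O(n^2) memory;
    # fine for a sketch, use a KD-tree for large datasets).
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a sample is not its own neighbor
    nn = np.argsort(d, axis=1)[:, :k]        # indices of the k nearest neighbors
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))         # pick a random minority sample
        nb = X_min[rng.choice(nn[j])]        # and one of its neighbors
        out[i] = X_min[j] + rng.random() * (nb - X_min[j])  # point on segment
    return out
```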
“…Local Importance Pooling (LIP) [42] utilizes learned weights as a subnetwork attention-based mechanism. SoftPool [43] downsamples the activation map based on the exponentially weighted (softmax-normalized) sum of the original pixels within the kernel region. This can improve the representation of high-contrast regions that appear around object edges or at specific feature activations.…”
Section: Related Work
confidence: 99%
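For contrast with SoftPool's fixed softmax weighting, Local Importance-based Pooling learns the weighting: a logit subnetwork scores each pixel, and pooling becomes an importance-weighted average within each window. The single 1x1-convolution logit module below is a deliberate simplification of the paper's logit subnetworks, used only to show the weighting scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImportancePool2d(nn.Module):
    """LIP-style pooling: O = avg_pool(x * exp(G(x))) / avg_pool(exp(G(x))),
    where G is a learned logit subnetwork (here a single 1x1 convolution)."""

    def __init__(self, channels: int, kernel_size: int = 2, stride: int = 2):
        super().__init__()
        self.logit = nn.Conv2d(channels, channels, kernel_size=1)
        self.k, self.s = kernel_size, stride

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.exp(self.logit(x))  # learned, strictly positive importance
        return F.avg_pool2d(x * w, self.k, self.s) / F.avg_pool2d(w, self.k, self.s)
```

Setting the logits to the activations themselves (G(x) = x) recovers the SoftPool weighting above, which is why the two methods are natural neighbors in this related-work discussion.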