2021
DOI: 10.3390/electronics10232987
Two-Branch Attention Learning for Fine-Grained Class Incremental Learning

Abstract: As a long-standing research area, class incremental learning (CIL) aims to learn a unified classifier effectively as the number of classes grows. Because of its small inter-class variances and large intra-class variances, fine-grained visual categorization (FGVC) is a challenging visual task that has not attracted enough attention in CIL. Localizing the critical regions specialized for fine-grained object recognition therefore plays a crucial role in FGVC. Additionally, it is important to learn fi…

Cited by 6 publications (2 citation statements)
References 40 publications
“…In [31], the proposed remote sensing image defogging network consists of an encoder and a decoder, and a dual self-attention module is applied to enhance the feature maps output by the encoding stage; it effectively improved the definition of foggy images. Zhong et al. [32] integrated a dual attention network, composed of position attention and channel attention, into the feature extraction network, which enhanced the robustness of the backbone network and achieved higher accuracy in person re-identification tasks. Guo et al. [33] proposed TBAL-Net, which uses an attention mechanism to learn fine-grained feature representations and is an effective training framework for fine-grained class incremental learning (CIL).…”
Section: Related Work, Attention Network
confidence: 99%
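The channel-attention branch of the dual attention design described for [32] can be illustrated with a minimal squeeze-and-excitation-style sketch. This is a generic illustration, not the cited network: the weights `w1`/`w2`, the channel count, and the bottleneck size are hypothetical.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Scale each channel of feat (C, H, W) by a learned gate in (0, 1)."""
    squeeze = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)        # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid -> per-channel gate (C,)
    return feat * gate[:, None, None]             # reweight channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))  # hypothetical feature map: 4 channels, 8x8
w1 = rng.standard_normal((2, 4))       # hypothetical squeeze weights (bottleneck of 2)
w2 = rng.standard_normal((4, 2))       # hypothetical excitation weights
out = channel_attention(feat, w1, w2)
print(out.shape)  # (4, 8, 8) — same shape, channels rescaled
```

A position-attention branch would instead reweight spatial locations; the cited work combines both branches before feeding the feature extractor's output onward.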
“…Considering that this limitation originates from the fixed structure of CNNs, novel convolution operators have been proposed to improve the learning of spatial transformations. Dilated convolution [33] aggregates contextual information from an expanded receptive field. In [35], deformable convolution was proposed to sample spatial locations with additional self-learned offsets.…”
Section: Deformable Convolution Network
confidence: 99%
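The expanded receptive field of dilated convolution can be sketched in 1D: a kernel of size k with dilation d skips d-1 inputs between taps, so it spans (k-1)·d + 1 positions while keeping only k weights. A minimal NumPy illustration (not the cited implementation; the input and kernel values are arbitrary):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1D convolution where kernel taps are spaced `dilation` apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
# Kernel of size 3 with dilation 2 spans 5 input positions per output.
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2))
# -> [ 6.  9. 12. 15. 18. 21.]
```

With dilation 1 this reduces to an ordinary convolution; stacking layers with growing dilation enlarges the receptive field exponentially without adding parameters, which is the property the cited work exploits for aggregating context.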