2022
DOI: 10.1609/aaai.v36i3.20196

Dual Attention Networks for Few-Shot Fine-Grained Recognition

Abstract: The task of few-shot fine-grained recognition is to classify images belonging to subordinate categories based on only a few examples. Due to the fine-grained nature of the task, it is desirable to capture subtle but discriminative part-level patterns from limited training data, which makes this a challenging problem. In this paper, to generate fine-grained tailored representations for few-shot recognition, we propose a Dual Attention Network (Dual Att-Net) consisting of two branches of hard and soft attention…
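The abstract describes complementary hard- and soft-attention branches over part-level features. As a rough, hypothetical sketch of how such a pairing can be wired up (not the paper's actual architecture; the 1x1 scoring convolution, the top-k hard selection, and the concatenation fusion are all assumptions for illustration), consider:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionSketch(nn.Module):
    """Hypothetical hard + soft attention over a CNN feature map.

    Input: feature map of shape (B, C, H, W) from any backbone.
    Soft branch: attention-weighted average over all H*W locations.
    Hard branch: average of only the top-k highest-scoring locations.
    """

    def __init__(self, channels: int, k: int = 8):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location score
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feats = x.flatten(2)                       # (B, C, H*W)
        scores = self.score(x).flatten(2)          # (B, 1, H*W)

        # Soft attention: weighted average of all spatial locations.
        soft_w = F.softmax(scores, dim=-1)         # (B, 1, H*W)
        soft_feat = (feats * soft_w).sum(-1)       # (B, C)

        # Hard attention: keep only the k most activated locations.
        topk = scores.topk(self.k, dim=-1).indices.expand(-1, c, -1)
        hard_feat = feats.gather(2, topk).mean(-1)  # (B, C)

        # Fuse the two branches (concatenation is one simple choice).
        return torch.cat([soft_feat, hard_feat], dim=1)  # (B, 2C)
```

For example, `DualAttentionSketch(512)(torch.randn(2, 512, 7, 7))` returns a `(2, 1024)` embedding; the paper's actual part modeling and fusion may differ substantially.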

Cited by 15 publications (6 citation statements)
References 41 publications

Citation statements (ordered by relevance):
“…CubMeta [43] introduced the concept of curriculum learning into meta-learning and proposed an effective self-paced meta-learning method to obtain stronger meta-learners for few-shot classification. Dual Att-Net [47] adopted a dual attention to explicitly model the crucial relations of fine-grained parts and implicitly capture discriminative yet subtle fine-grained details. While these methods are all based on CNNs for feature embedding, most recent works exploited GNNs for more effective modeling of inter- and intra-class relations in few-shot classification.…”
Section: A. Few-shot Learning (mentioning)
confidence: 99%
“…The attention mechanism [50] aims to focus on image regions that are more task-related by learning a binary or weighted matrix. In particular, self-attention [47], [51]–[53] considers the inherent correlation (attention) of the input features themselves, and is mostly applied in deep models. In GCN scenarios, GAT [54] used a graph attention layer to learn a weighted parameter vector over entire neighborhoods to update node representations.…”
Section: Attention Mechanism on Graph Models (mentioning)
confidence: 99%
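To ground the self-attention idea in the quoted passage, here is a minimal scaled dot-product self-attention sketch; it is the generic formulation (without learned Q/K/V projections), not the specific variant used by any of the cited methods:

```python
import torch

def self_attention(x: torch.Tensor) -> torch.Tensor:
    """Generic scaled dot-product self-attention.

    x: tokens of shape (B, N, D), e.g. image regions or graph nodes,
    each with a D-dimensional feature. Every output token is a
    weighted sum of all tokens, with weights derived from the input
    itself.
    """
    d = x.size(-1)
    attn = torch.softmax(x @ x.transpose(1, 2) / d ** 0.5, dim=-1)  # (B, N, N)
    return attn @ x                                                 # (B, N, D)
```

In practice, queries, keys, and values are learned linear projections of the input; the bare form above only exposes the "correlation of the input features themselves" that the passage refers to.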
“…We reported the average accuracy (%) over 600 randomly generated episodes, along with the 95% confidence interval on the test set, following the protocol commonly used in most methods (Jankowski et al., 2011; Nichol et al., 2018; Xu et al., 2022). Our model was trained end-to-end, without any pre-training process.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
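The evaluation protocol quoted above (mean accuracy over 600 episodes with a 95% confidence interval) is typically computed with a normal approximation. A minimal sketch, assuming the per-episode accuracies have already been collected:

```python
import numpy as np

def mean_and_ci95(accuracies: np.ndarray) -> tuple[float, float]:
    """Mean accuracy and 95% confidence interval half-width.

    accuracies: per-episode accuracies in percent, e.g. 600 values.
    Uses the normal approximation 1.96 * std / sqrt(n), conventional
    in few-shot learning evaluations.
    """
    n = len(accuracies)
    mean = accuracies.mean()
    half_width = 1.96 * accuracies.std(ddof=1) / np.sqrt(n)
    return mean, half_width

# e.g. acc, ci = mean_and_ci95(np.array(episode_accs))
#      print(f"{acc:.2f} ± {ci:.2f}")
```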
“…These methods learn an initialization that can be fine-tuned quickly on a new task with a small amount of data. Attention-based methods, e.g., MultiAtt [33], MattML [46] and Dual Att-Net [41], use attention mechanisms to identify the most informative parts or regions of the input images. These methods aim to learn a feature representation that is not only discriminative but also informative for few-shot recognition.…”
Section: Related Work (mentioning)
confidence: 99%