2022
DOI: 10.1109/tpami.2020.3025814

Knowledge-Guided Multi-Label Few-Shot Learning for General Image Recognition

Cited by 141 publications (72 citation statements)
References 60 publications
“…In the literature, a large number of traditional approaches have been proposed, and they can be divided into exemplar-based methods [43,18] and regression-based methods [44]. In the past decade, deep neural networks (DNNs) have achieved great success in various tasks [45,46,47,48,49,50], and many researchers have also applied DNNs to sketch synthesis [51,52,53,54,55,56,33]. For example, Zhang et al [13] developed an end-to-end fully convolutional network to model the mapping between photos and sketches.…”
Section: Related Work
confidence: 99%
“…It relies on a data augmentation strategy which generates synthesized feature vectors via label-set operations. KGGR (Chen et al 2020) uses a GCN to take label dependencies into account, where labels are modelled as nodes and two nodes are connected if the corresponding labels tend to co-occur. The strength of these label dependencies is normally estimated from co-occurrence statistics, but for labels with limited training data, dependency strength is instead estimated based on GloVe word vectors (Pennington, Socher, and Manning 2014).…”
Section: Multi-label Few-shot Image Classification
confidence: 99%
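The mechanism sketched in the excerpt above is easy to illustrate. The following is a minimal, hypothetical sketch (not the authors' released code) of how a label co-occurrence graph can be built from multi-hot annotations and how label embeddings, e.g. GloVe vectors, can be propagated through a single GCN layer; the function names, the 0.1 threshold, and the random toy data are assumptions made only for illustration.

```python
# Hypothetical sketch of KGGR-style label-dependency modelling:
# labels are graph nodes, edges come from co-occurrence statistics,
# and node features (e.g. GloVe vectors) are propagated with one GCN layer.
import numpy as np

def cooccurrence_adjacency(label_matrix: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """label_matrix: (num_images, num_labels) multi-hot annotations."""
    counts = label_matrix.T @ label_matrix            # co-occurrence counts C_ij
    occur = np.diag(counts).clip(min=1)               # per-label frequencies
    cond = counts / occur[:, None]                    # conditional probability P(label_j | label_i)
    adj = (cond >= threshold).astype(float) * cond    # keep edges between labels that co-occur often enough
    np.fill_diagonal(adj, 1.0)                        # add self-loops
    return adj

def gcn_layer(adj: np.ndarray, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One propagation step: row-normalised adjacency, linear transform, ReLU."""
    a_hat = adj / adj.sum(axis=1, keepdims=True)
    return np.maximum(a_hat @ features @ weight, 0.0)

# Toy usage: 300-d word vectors stand in for GloVe embeddings, which the quoted
# passage describes as the fallback signal for labels with limited training data.
num_labels, dim = 20, 300
labels = (np.random.rand(1000, num_labels) > 0.8).astype(float)   # toy multi-hot annotations
glove = np.random.randn(num_labels, dim)                           # stand-in for GloVe embeddings
adj = cooccurrence_adjacency(labels)
updated = gcn_layer(adj, glove, np.random.randn(dim, 128) * 0.01)
```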
“…In ML-FSIC, this strategy is difficult to adopt, since each image may have multiple labels. The idea of setting N = |C_base| during training and N = |C_novel| during testing conforms to the strategy that was used by Alfassy et al (2019) and Chen et al (2020). However, Alfassy et al (2019) fix the number of training examples per label as K, with K ∈ {1, 5}, which has two important shortcomings.…”
Section: Problem Setting
confidence: 99%
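To make the difficulty concrete: in a multi-label setting, sampling a support set with exactly K examples per label is not straightforward, because adding one image contributes examples for all of its labels at once. Below is a hypothetical greedy sketch (not taken from Alfassy et al (2019) or Chen et al (2020)) that merely tries to cover every label in the full label set at least K times; the overshoot for frequent labels shows why a fixed per-label K is hard to enforce when images carry multiple labels.

```python
# Illustrative sketch (assumed helper): greedily pick a support set that covers
# every label about K times. Selecting one image can add examples for several
# labels simultaneously, so exact per-label counts are generally unattainable.
from collections import Counter
from typing import Dict, List, Sequence

def greedy_support_set(annotations: Dict[int, Sequence[str]],
                       labels: Sequence[str],
                       k: int) -> List[int]:
    """annotations: image_id -> list of labels; returns chosen image ids."""
    need = Counter({label: k for label in labels})
    chosen: List[int] = []
    for img_id, img_labels in annotations.items():
        if any(need[l] > 0 for l in img_labels):
            chosen.append(img_id)
            for l in img_labels:
                need[l] -= 1                      # may overshoot for frequent labels
        if all(v <= 0 for v in need.values()):
            break
    return chosen

# Toy usage: 1-shot support over all labels (analogous to N = |C_novel| at test time).
toy = {0: ["cat", "sofa"], 1: ["dog"], 2: ["sofa", "tv"], 3: ["dog", "cat"]}
print(greedy_support_set(toy, labels=["cat", "dog", "sofa", "tv"], k=1))
```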
“…Multi-label image recognition receives increasing attention (Wei et al 2016; Chen et al 2020) since it is more practical and necessary than its single-label counterpart. To solve this task, lots of efforts are dedicated to discovering discriminative local regions for feature enhancement by object proposal algorithms (Wei et al 2016; Yang et al 2016) or visual attention mechanisms (Ba, Mnih, and Kavukcuoglu 2014; Chen et al 2018b).…”
Section: Related Work
confidence: 99%
“…Recently, lots of efforts (Chen et al 2019c,a, 2020) are dedicated to the task of multi-label image recognition, as it benefits various applications ranging from content-based image retrieval and recommendation systems to surveillance systems and assistive robots. Despite achieving impressive progress, current leading algorithms (Chen et al 2019c,a, 2020) introduce data-hungry deep convolutional networks (He et al 2016; Simonyan and Zisserman 2015) to learn discriminative features, and thus they depend on collecting large-scale clean and complete multi-label datasets. However, it is very time-consuming to collect a consistent and exhaustive list of labels for every image, making the collection of clean and complete multi-label annotations more difficult.…”
[Figure 1 of the citing paper: Two examples of images with partial labels (unknown labels are highlighted in red).]
Section: Introduction
confidence: 99%