2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00967

Meta R-CNN: Towards General Solver for Instance-Level Low-Shot Learning

Abstract: Resembling the rapid learning capability of humans, low-shot learning empowers vision systems to understand new concepts after training with only a few samples. Leading approaches are derived from meta-learning on images containing a single visual object. Obscured by complex backgrounds and multiple objects in one image, such approaches make it hard to advance research on low-shot object detection/segmentation. In this work, we present a flexible and general methodology to achieve these tasks. Our work extends Faster/Mask R-CNN by proposi…

Cited by 438 publications (586 citation statements)
References 41 publications
“…[35] leveraged fully labeled base classes and quickly adapted to novel classes using a meta-feature learner and a reweighting module within a one-stage detection architecture. Analogously, [42] reweighted RoI features in the detection head with channel-wise attention based on a two-stage framework. Recent work [51] likewise proposes prototypical knowledge transfer with an attached meta-learner.…”
Section: Few-shot Object Detection
confidence: 99%
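The channel-wise reweighting attributed to [42] (Meta R-CNN's own mechanism) can be pictured as a soft per-channel gate: a class-attentive vector, inferred by a meta-learner from a few support images, multiplies each channel of a pooled RoI feature before the detection head. Below is a minimal PyTorch sketch of that gating step; the class names and shapes are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class ChannelReweighting(nn.Module):
    """Illustrative channel-wise attention over pooled RoI features.

    A class-attentive vector (in Meta R-CNN, produced by a meta-learner
    from few support examples) gates each channel of an RoI feature
    before it enters the detection head. All names are hypothetical.
    """
    def forward(self, roi_feats: torch.Tensor, class_vec: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_rois, C, H, W) pooled RoI features
        # class_vec: (C,) one class-attentive vector
        # Sigmoid keeps the gate in (0, 1): a soft per-channel mask.
        attn = torch.sigmoid(class_vec).view(1, -1, 1, 1)
        return roi_feats * attn  # broadcast channel-wise product

# Toy usage: 4 RoIs with 256-channel 7x7 features.
rois = torch.randn(4, 256, 7, 7)
vec = torch.randn(256)
print(ChannelReweighting()(rois, vec).shape)  # torch.Size([4, 256, 7, 7])
```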
“…Pascal VOC annotates more than 30k images across 20 categories, and MS COCO offers a more diverse set of 80 categories over 200k images, subsuming all 20 Pascal VOC categories. To ensure the reliability of the experiments, we adopt the same settings as other SOTA FSOD detectors [35,41,42] to construct a few-shot detection dataset. We carried out three groups of experiments: on the Pascal VOC dataset, on the MS COCO dataset, and on the cross-benchmark setting from MS COCO to Pascal VOC.…”
Section: A. Dataset Setting
confidence: 99%
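The shared FSOD data setting referenced here boils down to: base classes keep all annotations, while each novel class keeps only K annotated instances. A hedged sketch of sampling such a K-shot split; the simplified annotation-record format and function name are assumptions for illustration.

```python
import random
from collections import defaultdict

def build_kshot_split(annotations, novel_classes, k, seed=0):
    """Keep at most k annotated instances per novel class and every
    annotation for base classes.

    `annotations` is assumed to be a list of dicts like
    {"image_id": ..., "category": ..., "bbox": [...]} -- a simplified
    stand-in for VOC/COCO annotation records.
    """
    rng = random.Random(seed)  # fixed seed: few-shot splits are sampled once
    per_class = defaultdict(list)
    for ann in annotations:
        per_class[ann["category"]].append(ann)

    kept = []
    for cat, anns in per_class.items():
        if cat in novel_classes:
            rng.shuffle(anns)
            kept.extend(anns[:k])   # only k instances for a novel class
        else:
            kept.extend(anns)       # base classes keep everything
    return kept
```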
“…As active learning algorithms select the most informative-to-learn instances from abundant unlabeled training data and train a deep learning model with the selected data first, it is possible to significantly reduce the amount of training data and the human effort required for DB development (Kim et al., 2020b). Other researchers have also investigated few-shot learning algorithms (Kang et al., 2019; Wang et al., 2019c; Yan et al., 2019a), which aim to learn and detect new types of target objects even when only a small amount of training data is given, i.e., fewer than 30 images. This would be very beneficial for construction sites, where diverse types of construction resources exist and often vary from phase to phase.…”
Section: Database-free Vision-based Monitoring
confidence: 99%
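The "most informative-to-learn" selection mentioned above is commonly instantiated as uncertainty sampling. As a concrete illustration (an assumed entropy-based criterion, not necessarily what Kim et al. used), one picks the pool samples on which the current model is least certain:

```python
import numpy as np

def select_most_informative(probs, budget):
    """Pick the `budget` unlabeled samples with the highest predictive
    entropy -- one common notion of "most informative to learn".

    probs: (N, num_classes) softmax outputs of the current model.
    Returns indices into the unlabeled pool, most uncertain first.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[-budget:][::-1]

# Toy pool of 5 samples over 3 classes.
p = np.array([[0.90, 0.05, 0.05],
              [0.34, 0.33, 0.33],
              [0.60, 0.30, 0.10],
              [0.50, 0.50, 0.00],
              [0.20, 0.40, 0.40]])
print(select_most_informative(p, 2))  # [1 4]: the two flattest rows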
“…Pioneers of few-shot detection have explored background depression [15], metric learning [16], and feature reweighting [17], [18]. Most of these works perform few-shot detection implicitly, either by manipulating feature maps to highlight regions [15] or channels [17], [18] related to novel classes, or by constraining different classes to be separable in a learned embedding space [16]. In addition, [19] proposes a simple fine-tuning baseline for few-shot detection, introducing instance-level feature normalization when fine-tuning on novel classes.…”
Section: Introduction
confidence: 99%
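The "instance-level feature normalization" of the fine-tuning baseline [19] amounts to scoring RoIs with a cosine-similarity classifier: both the per-instance feature and each class weight are L2-normalized, and the logit is a scaled dot product. A minimal sketch under that reading; the scale value and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def cosine_scores(roi_feats, class_weights, scale=20.0):
    """Cosine-similarity classification head.

    L2-normalize each RoI feature (instance-level normalization) and
    each class weight, then take a scaled dot product. The temperature
    `scale` sharpens the softmax; 20.0 is a typical but assumed value.
    """
    f = F.normalize(roi_feats, dim=1)      # (num_rois, D)
    w = F.normalize(class_weights, dim=1)  # (num_classes, D)
    return scale * f @ w.t()               # (num_rois, num_classes) logits

# Toy check: 3 RoIs, 5 classes, 128-d features.
scores = cosine_scores(torch.randn(3, 128), torch.randn(5, 128))
print(scores.shape)  # torch.Size([3, 5])
```

Normalizing both sides removes feature-magnitude bias toward the heavily sampled base classes, which is why this small change helps when fine-tuning on scarce novel-class data.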