Proceedings of the British Machine Vision Conference 2017
DOI: 10.5244/c.31.167

One-Shot Learning for Semantic Segmentation

Abstract: Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmenta…
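The abstract only outlines the architecture, so the following is a minimal, hypothetical PyTorch-style sketch of the idea it describes: a conditioning branch consumes one annotated support example and emits the parameters of a 1x1 classifier, which is then applied densely to the query image's FCN features. All module names, the masked-pooling conditioning, and the dimensions are assumptions for illustration, not the paper's implementation (which builds on a VGG-based FCN).

import torch
import torch.nn as nn
import torch.nn.functional as F

class OneShotSegmenter(nn.Module):
    """Hypothetical sketch: one annotated support example is turned into the
    parameters of a 1x1 foreground/background classifier for the query image."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # Shared feature extractor (stand-in for the FCN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Conditioning branch: pooled support features -> classifier weights + bias.
        self.param_head = nn.Linear(feat_dim, feat_dim + 1)

    def forward(self, support_img, support_mask, query_img):
        # Masked average pooling over the annotated region of the support image.
        s_feat = self.encoder(support_img)                                  # (B, C, h, w)
        mask = F.interpolate(support_mask, size=s_feat.shape[-2:], mode='nearest')
        pooled = (s_feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)

        # Predict the parameters of a binary 1x1 classifier from the support.
        params = self.param_head(pooled)                                    # (B, C + 1)
        w, b = params[:, :-1], params[:, -1]

        # Apply the predicted classifier densely to the query features.
        q_feat = self.encoder(query_img)                                    # (B, C, h, w)
        logits = torch.einsum('bchw,bc->bhw', q_feat, w) + b[:, None, None]
        return F.interpolate(logits.unsqueeze(1), size=query_img.shape[-2:],
                             mode='bilinear', align_corners=False)          # (B, 1, H, W)

# Example call with random tensors (support mask is binary, shape (B, 1, H, W)):
# model = OneShotSegmenter()
# out = model(torch.rand(1, 3, 128, 128), torch.ones(1, 1, 128, 128), torch.rand(1, 3, 128, 128))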

Cited by 503 publications (456 citation statements) | References 27 publications
“…For evaluation, we use two datasets: (a) PASCAL-5^i, which combines images from the PASCAL VOC 2012 [7] and Extended SDS [11] datasets; and (b) COCO-20^i, which is based on the MSCOCO dataset [16]. For PASCAL-5^i, we use the same 4-fold cross-validation setup as prior work [26,20,6]. Specifically, from the 20 object classes in PASCAL VOC 2012, for each fold i = 0, ..., 3, we sample five as test classes, and use the remaining 15 classes for training.…”
Section: Methods (mentioning)
confidence: 99%
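As an illustration of the 4-fold PASCAL-5^i protocol quoted above, the sketch below builds the per-fold class splits. It assumes the usual convention that fold i holds out VOC classes 5i+1 through 5(i+1) for testing; the function name is illustrative.

# PASCAL-5^i fold construction: 5 held-out test classes per fold, 15 for training.
VOC_CLASSES = [
    'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
    'bus', 'car', 'cat', 'chair', 'cow',
    'diningtable', 'dog', 'horse', 'motorbike', 'person',
    'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor',
]

def pascal_5i_split(fold):
    """Return (train_classes, test_classes) for fold in {0, 1, 2, 3}."""
    assert fold in range(4)
    test = VOC_CLASSES[5 * fold: 5 * (fold + 1)]
    train = [c for c in VOC_CLASSES if c not in test]
    return train, test

for i in range(4):
    _, test_cls = pascal_5i_split(i)
    print(f'fold {i}: test classes = {test_cls}')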
“…Metrics. As in [26,20,6], we use the mean intersection-over-union (mIoU) for quantitative evaluation. IoU of class l is defined as IoU_l = TP_l / (TP_l + FP_l + FN_l), where TP, FP and FN are the number of pixels that are true positives, false positives and false negatives of the predicted segmentation masks, respectively.…”
Section: Methods (mentioning)
confidence: 99%
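The per-class IoU and mIoU quoted above can be computed directly from integer label masks; below is a minimal NumPy sketch (helper names are illustrative, and classes with no pixels in either mask are skipped via NaN).

import numpy as np

def class_iou(pred, gt, cls):
    # True positives, false positives and false negatives for one class.
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    denom = tp + fp + fn
    return tp / denom if denom > 0 else float('nan')

def mean_iou(pred, gt, classes):
    # Average IoU over the evaluated classes, ignoring absent ones.
    return float(np.nanmean([class_iou(pred, gt, c) for c in classes]))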
“…Segmentation masks with dashed border denote ground truth annotations. classification [25,23,24,18,6,20,12,14] and a few targeting segmentation tasks [21,17,4,28,4,8].…”
Section: Introduction (mentioning)
confidence: 99%