2021
DOI: 10.48550/arxiv.2104.06394
Preprint
All you need are a few pixels: semantic segmentation with PixelPick

Abstract: A central challenge for the task of semantic segmentation is the prohibitive cost of obtaining dense pixel-level annotations to supervise model training. In this work, we show that in order to achieve a good level of segmentation performance, all you need are a few well-chosen pixel labels. We make the following contributions: (i) We investigate the semantic segmentation setting in which labels are supplied only at sparse pixel locations, and show that deep neural networks can use a handful of such labels to go…

Cited by 2 publications (1 citation statement)
References 72 publications
“…It achieves an mIoU of 72.1 % on the 500 images reserved for testing. Following other works on Cityscapes [18,20], we downsample all images to a resolution of 256 × 512. We train all autoencoders for 50 epochs with a learning rate of 0.001, reducing the learning rate by a factor of 10 if the loss plateaus for ten epochs.…”
Section: Baseline Comparison
confidence: 99%
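The training schedule quoted above (learning rate 0.001, reduced by a factor of 10 when the loss plateaus for ten epochs) can be sketched in plain Python. This is a minimal illustration only; the class name and structure are hypothetical and not taken from the cited work:

```python
class PlateauLRScheduler:
    """Reduce the learning rate by `factor` when the tracked loss has
    not improved for `patience` consecutive epochs, mirroring the
    quoted setup: lr=0.001, 10x reduction, patience of 10 epochs."""

    def __init__(self, lr=0.001, factor=0.1, patience=10):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")   # best loss seen so far
        self.bad_epochs = 0        # epochs without improvement

    def step(self, loss):
        """Call once per epoch with the current loss; returns the lr."""
        if loss < self.best:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0  # restart the patience window
        return self.lr
```

In a real training loop over the 50 epochs mentioned in the quote, `step()` would be called after each epoch's validation loss is computed; frameworks such as PyTorch provide an equivalent built-in (`ReduceLROnPlateau`).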