2019
DOI: 10.1007/978-3-030-20870-7_38

Unseen Object Segmentation in Videos via Transferable Representations

Abstract: In order to learn object segmentation models in videos, conventional methods require a large amount of pixel-wise ground truth annotations. However, collecting such supervised data is time-consuming and labor-intensive. In this paper, we exploit existing annotations in source images and transfer such visual information to segment videos with unseen object categories. Without using any annotations in the target video, we propose a method to jointly mine useful segments and learn feature representations that bet…
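The abstract sketches a joint procedure: transfer supervision from annotated source images, then mine useful segments on the unlabeled target video while refining the learned representation. The toy loop below only illustrates that general self-training pattern under assumed details and is not the authors' published algorithm; every function, array, and threshold here is hypothetical.

# Illustrative sketch only, NOT the method from the paper: train on labeled
# "source" pixels, then alternately mine confident segments on unlabeled
# "target" pixels and refine the model. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def train_pixel_classifier(features, labels, l2=1e-2):
    # Ridge-regression stand-in for a segmentation model: features (N, D) -> labels (N,).
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + l2 * np.eye(d), features.T @ labels)

def predict_scores(w, features):
    # Per-pixel foreground scores in (0, 1) via a squashed linear model.
    return 1.0 / (1.0 + np.exp(-(features @ w)))

# Toy "source" data: labeled pixel features (stand-in for annotated images).
src_feats = rng.normal(size=(500, 8))
src_labels = (src_feats[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(float)

# Toy "target" data: unlabeled pixel features (stand-in for video frames).
tgt_feats = rng.normal(size=(500, 8))

w = train_pixel_classifier(src_feats, src_labels)

for it in range(3):
    scores = predict_scores(w, tgt_feats)
    # Mine confident segments: keep only pixels the current model is sure about.
    confident = (scores > 0.8) | (scores < 0.2)
    pseudo_labels = (scores > 0.5).astype(float)
    # Refine the model jointly on source labels and mined target pseudo-labels.
    feats = np.vstack([src_feats, tgt_feats[confident]])
    labels = np.concatenate([src_labels, pseudo_labels[confident]])
    w = train_pixel_classifier(feats, labels)
    print(f"iteration {it}: mined {int(confident.sum())} confident target pixels")

In the actual paper the mining and refinement steps would operate on CNN features of video frames rather than random toy vectors; the sketch only shows the alternation between confident-segment mining and model updates that the abstract describes.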

Cited by 1 publication (2021; 1 citation statement). References 44 publications (84 reference statements).
“…For that reason, models that are well trained on one dataset do not necessarily perform well when applied to another, as stated in [21,3,30]. The issue of degeneration in VOS models has existed for a while, especially for off-line methods where … [Figure 1 caption: Predictions of the model trained with and without our UDA method on FBMS59 [20] and Youtube-Object [24]; the optical flow (left) indicates the motion of objects.]”
Section: Introduction (mentioning)
confidence: 94%