2018
DOI: 10.1007/978-3-030-01219-9_13

Unsupervised Video Object Segmentation with Motion-Based Bilateral Networks

Cited by 140 publications (104 citation statements)
References 34 publications
“…In Table 3, several other deep learning based state-of-the-art UVOS methods [9,52,24,53,33] leverage both appearance as well as extra motion information to improve the performance. Different from these methods, the proposed COSNet only utilizes appearance information but achieves superior performance.…”
Section: Quantitative and Qualitative Results
confidence: 99%
“…We also perform experiments on the FBMS dataset for completeness. Table 4 shows that our COSNet performs better (75.6% in mean J) than state-of-the-art methods [14,42,24,21,30,32,33,49,9]. In most competing methods, except for the RGB input, additional optical flow information is utilized to estimate the segmentation mask.…”
Section: Quantitative and Qualitative Results
confidence: 99%
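The statement above reports results as mean J, the region similarity (Jaccard index) between predicted and ground-truth masks averaged over frames. A minimal sketch of how this metric is typically computed follows, assuming binary NumPy masks; the helper names are illustrative and not taken from any cited implementation.

```python
import numpy as np

def region_similarity(pred_mask, gt_mask):
    """Jaccard index (region similarity J): intersection over union
    of the predicted and ground-truth binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Both masks empty: treat as a perfect match by convention.
        return 1.0
    return np.logical_and(pred, gt).sum() / union

def mean_j(pred_masks, gt_masks):
    """Mean J over a sequence of per-frame masks."""
    return float(np.mean([region_similarity(p, g)
                          for p, g in zip(pred_masks, gt_masks)]))
```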
“…Recent deep learning based methods learn more powerful video object features from large-scale training data, yielding a zero-shot solution [63] (still no annotation used for any testing frame). Many of these [7,57,21,58,31,55] employ two-stream networks to combine local motion and appearance information, and apply recurrent neural networks to model the dynamics in a frame-by-frame manner. Though these methods greatly promoted the development of this field and gained promising results, they generally suffer from two limitations.…”
Section: Introduction
confidence: 99%
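The design pattern described in this quote, an appearance stream and a motion (optical flow) stream whose fused features are propagated across frames by a recurrent module, can be illustrated with a minimal PyTorch sketch. This is a generic toy model with placeholder names and layer sizes (TwoStreamRecurrentVOS, feat), not the architecture of any specific cited method.

```python
import torch
import torch.nn as nn

class TwoStreamRecurrentVOS(nn.Module):
    """Illustrative two-stream zero-shot VOS model: an appearance stream
    (RGB frame) and a motion stream (optical flow) are fused per frame,
    and a ConvGRU-style recurrent cell carries state across frames.
    All layer sizes are arbitrary placeholders, not a published model."""

    def __init__(self, feat=32):
        super().__init__()
        self.appearance = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.motion = nn.Sequential(
            nn.Conv2d(2, feat, 3, padding=1), nn.ReLU(),   # 2-channel flow
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # Minimal convolutional GRU cell over the fused features.
        self.gru_zr = nn.Conv2d(2 * feat + feat, 2 * feat, 3, padding=1)
        self.gru_h = nn.Conv2d(2 * feat + feat, feat, 3, padding=1)
        self.head = nn.Conv2d(feat, 1, 1)                   # mask logits

    def forward(self, frames, flows):
        # frames: (T, B, 3, H, W); flows: (T, B, 2, H, W)
        h = None
        masks = []
        for frame, flow in zip(frames, flows):
            x = torch.cat([self.appearance(frame), self.motion(flow)], dim=1)
            if h is None:
                h = x.new_zeros(x.size(0), self.head.in_channels,
                                x.size(2), x.size(3))
            zr = torch.sigmoid(self.gru_zr(torch.cat([x, h], dim=1)))
            z, r = zr.chunk(2, dim=1)
            h_tilde = torch.tanh(self.gru_h(torch.cat([x, r * h], dim=1)))
            h = (1 - z) * h + z * h_tilde
            masks.append(torch.sigmoid(self.head(h)))
        return torch.stack(masks)  # (T, B, 1, H, W) per-frame soft masks
```

For example, feeding dummy inputs torch.randn(4, 1, 3, 64, 64) and torch.randn(4, 1, 2, 64, 64) returns a (4, 1, 1, 64, 64) tensor of per-frame soft masks.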
“…Existing VOS methods can be divided into two settings based on the degrees of human involvement, namely, unsupervised and semi-supervised. The unsupervised VOS methods [49,44,17,32,29] do not require any manual annotation, while the semi-supervised methods [47,6,9,18] rely on the annotated mask for objects in the first frame.…”
Section: Introduction
confidence: 99%