2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.87
Bilateral Space Video Segmentation

Cited by 206 publications (206 citation statements) | References 33 publications
“…For single object VOS, we compare our RANet with 6 state-of-the-art OL based and 11 offline methods [1, 3, 8-10, 19, 22, 23, 35, 37, 38, 40, 45, 49-51, 59] in Table 1, including OSVOS-S [37], PReMVOS [35], RGMP [38], FEELVOS [49], etc. To evaluate our RANet trained with static images, we compare it with some methods [22, 23, 36, 40, 47] that do not use the DAVIS training set. For multi-object VOS, we compare with some state-of-the-art offline methods [3, 9, 19, 50, 59], and also list results of some OL based methods [1, 3, 19, 37, 50] for reference.…”
Section: Comparison to the State of the Art
confidence: 99%
“…We compare the proposed STCNN method to 11 state-of-the-art semi-supervised algorithms, namely BVS [34], JFS [35], SCF [20], MRFCNN [2], LT [25], OSVOS [3], MSK [38], OFL [46], CRN [15], DRL [12], and OnAVOS [47] in Table 2. As shown in Table 2, the STCNN method produces the best result with 0.796 mean IoU, surpassing the previous state of the art, MRFCNN [2] (0.784 mean IoU), by 0.012 mIoU.…”
Section: YouTube-Objects Dataset
confidence: 99%
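The mean-IoU comparison quoted above (0.796 vs. 0.784, a 0.012 gap) rests on the standard Jaccard measure J used by VOS benchmarks: per-frame intersection-over-union between predicted and ground-truth masks, averaged over frames. A minimal sketch of that computation follows; the function names are illustrative, not taken from any of the cited papers.

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union (Jaccard index J) of two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: conventionally a perfect match
    return np.logical_and(pred, gt).sum() / union

def mean_iou(pred_masks, gt_masks):
    """Mean IoU over a sequence of per-frame mask pairs."""
    return float(np.mean([iou(p, g) for p, g in zip(pred_masks, gt_masks)]))

# Toy example with two 2x2 frames:
pred = [np.array([[1, 1], [0, 0]]), np.array([[1, 0], [1, 0]])]
gt   = [np.array([[1, 0], [0, 0]]), np.array([[1, 0], [1, 0]])]
print(mean_iou(pred, gt))  # 0.75: frame IoUs are 0.5 and 1.0
```

Benchmark scores such as those in the statement above additionally average this per-sequence mean over all sequences (and, for multi-object datasets, over objects).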
“…The symbol ↑ means higher scores indicate better performance. Bold font indicates the best result. Methods compared: BVS [34], JFS [35], SCF [20], MRFCNN [2], LT [25], OSVOS [3], MSK [38], OFL [46], CRN [15], DRL [12], OnAVOS [47], and Ours.…”
confidence: 99%
“…Moving away from the single-object hypothesis of DAVIS 2016, these datasets are increasingly focused on the segmentation of multiple objects, which increases the need for a user-provided annotation to specify each object of interest and has led to the development of more semi-supervised VOS methods using an annotated frame. With some exceptions [1, 13, 27, 32], the majority of semi-supervised VOS methods use an artificial neural network.…”
Section: Video Object Segmentation
confidence: 99%