CVPR 2011
DOI: 10.1109/cvpr.2011.5995517

Learning to find occlusion regions

Abstract: For two consecutive frames in a video, we identify which pixels in the first frame become occluded in the second. Such general-purpose detection of occlusion regions is difficult and important because one-to-one correspondence of imaged scene points is needed for many tracking, video segmentation, and reconstruction algorithms. Our hypothesis is that an effective trained occlusion detector can be generated on the basis of i) a broad spectrum of visual features, and ii) representative but synthetic training seq…
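
The abstract is truncated above, but its central idea, a per-pixel occlusion detector trained on a broad set of visual features over synthetic sequences, might be sketched roughly as below. The two cues used here, the random-forest learner, and the helper names (`extract_features`, `train_occlusion_detector`) are illustrative assumptions, not the paper's actual feature set or model.

```python
# Illustrative sketch only: a per-pixel occlusion classifier trained on
# stacked visual features. The two cues and the random-forest learner are
# assumptions for illustration, not the paper's actual feature set or model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(frame_t, frame_t1, flow):
    """Hypothetical helper: stack per-pixel cues into an (H*W, D) matrix.

    Uses two simple cues -- photo-consistency residual and flow magnitude --
    as stand-ins for the broad feature spectrum the abstract refers to.
    frame_t, frame_t1: (H, W, 3) images; flow: (H, W, 2) with (dx, dy).
    """
    h, w = frame_t.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Advect each pixel of frame_t by the flow and sample frame_t1 there
    # (nearest-neighbour sampling keeps the sketch short).
    xw = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yw = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    residual = np.abs(frame_t.astype(np.float32) -
                      frame_t1[yw, xw].astype(np.float32)).mean(axis=-1)
    flow_mag = np.linalg.norm(flow, axis=-1)
    return np.stack([residual.ravel(), flow_mag.ravel()], axis=1)

def train_occlusion_detector(training_examples):
    """training_examples: iterable of (frame_t, frame_t1, flow, occlusion_mask),
    where occlusion_mask marks pixels of frame_t occluded in frame_t1."""
    X, y = [], []
    for frame_t, frame_t1, flow, mask in training_examples:
        X.append(extract_features(frame_t, frame_t1, flow))
        y.append(mask.ravel().astype(int))
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(np.concatenate(X), np.concatenate(y))
    return clf
```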

Cited by 49 publications (50 citation statements) · References 47 publications

“…Another set of algorithms infer occlusion boundaries (Stein and Hebert 2009; He and Yuille 2010) and occluded regions (Humayun et al. 2011) by training a learning-based detector on appearance, motion, and depth features. The accuracy of these methods largely depends on the performance of the underlying feature detectors.…”
Section: Prior Related Work
confidence: 99%
“…Detecting occlusion boundaries is a well-studied problem, due to its usefulness in understanding the depth, motion, and context of the scene [22,12]. Fleet et al. [6] gave a Bayesian formulation where boundaries resulted from distinguishing local image motion.…”
Section: Related Research
confidence: 99%
“…Therefore, pixels advected from the reference frame I_t by the estimated flow F_{t→t+1} should correspond to the next frame I_{t+1}. This assumption breaks down at occlusion boundaries, hence a high photo-consistency residual should be indicative of such boundaries [12,13]. The residual photo-consistency feature F_PC is computed as…”
Section: Features for Occlusion Boundary Prediction
confidence: 99%
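
The residual photo-consistency cue quoted above lends itself to a short sketch: advect each pixel of I_t by the flow F_{t→t+1}, sample I_{t+1} at the advected position, and measure the colour difference. Since the excerpt truncates the exact formula for F_PC, the norm used here and the bilinear sampling via `scipy.ndimage.map_coordinates` are assumptions.

```python
# Sketch of a residual photo-consistency feature: high residual where the
# flow-advected pixel from I_t does not match I_{t+1}, which is indicative
# of occlusion boundaries. Function and parameter names are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates

def photo_consistency_residual(I_t, I_t1, flow):
    """I_t, I_t1: (H, W, C) float images; flow: (H, W, 2) with (dx, dy)."""
    h, w = I_t.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Positions in I_{t+1} that each pixel of I_t is advected to by the flow.
    xs_adv = xs + flow[..., 0]
    ys_adv = ys + flow[..., 1]
    # Bilinearly sample I_{t+1} at the advected positions, per channel.
    warped = np.stack(
        [map_coordinates(I_t1[..., c], [ys_adv, xs_adv], order=1, mode='nearest')
         for c in range(I_t1.shape[2])],
        axis=-1)
    # F_PC: per-pixel colour residual; large values suggest occlusion.
    return np.linalg.norm(I_t.astype(np.float64) - warped, axis=-1)
```
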
“…Specifically, we compute the difference in color, geometric context, and motion features for the regions on both sides of an edgelet. In addition, we compute flow-consistency features along each edgelet boundary [9]. We use a standard pairwise potential term over the edgelet occlusion predictions.…”
Section: Occlusion Boundaries
confidence: 99%
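
The side-difference idea in the last excerpt can be illustrated with a small sketch: average each feature map over the two regions flanking an edgelet and take the absolute difference, so that large colour, geometric-context, or motion differences flag likely occlusion boundaries. The input format (per-pixel feature maps plus boolean side masks) is an assumption for illustration.

```python
# Sketch of edgelet side-difference features: compare the mean feature values
# of the two regions flanking an edgelet. Inputs (feature maps, side masks)
# are assumed to come from an upstream segmentation/flow stage.
import numpy as np

def edgelet_side_difference(feature_maps, left_mask, right_mask):
    """feature_maps: dict name -> (H, W) or (H, W, C) array;
    left_mask, right_mask: boolean (H, W) masks of the two flanking regions."""
    diffs = {}
    for name, fmap in feature_maps.items():
        left_mean = fmap[left_mask].mean(axis=0)
        right_mean = fmap[right_mask].mean(axis=0)
        # Absolute per-channel difference between the two sides of the edgelet.
        diffs[name] = np.abs(left_mean - right_mean)
    return diffs
```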