2014
DOI: 10.1007/s11263-014-0776-7

Labeling Complete Surfaces in Scene Understanding

Abstract: Scene understanding requires reasoning about both what we can see and what is occluded. We offer a simple and general approach to infer labels of occluded background regions. Our approach incorporates estimates of visible surrounding background, detected objects, and shape priors from transferred training regions. We demonstrate the ability to infer the labels of occluded background regions in three datasets: the outdoor StreetScenes dataset, IndoorScene dataset and SUN09 dataset, all using the same approach. …
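To make the combination of cues concrete, the following minimal sketch fuses per-pixel, per-class score maps from the three evidence sources named in the abstract. It is an assumption-laden illustration, not the authors' implementation: the score maps, weights, and function name are invented for the example.

    import numpy as np

    def fuse_occluded_labels(background_scores, object_scores, prior_scores,
                             weights=(1.0, 1.0, 1.0)):
        """Hypothetical fusion of H x W x C score maps into per-pixel labels.

        background_scores: evidence from visible surrounding background.
        object_scores:     evidence from detected objects.
        prior_scores:      evidence from shape priors of transferred training regions.
        """
        combined = (weights[0] * background_scores
                    + weights[1] * object_scores
                    + weights[2] * prior_scores)
        return combined.argmax(axis=-1)      # most likely background label per pixel

    # Toy usage with random score maps for a 4-class labeling problem.
    rng = np.random.default_rng(0)
    h, w, c = 240, 320, 4
    labels = fuse_occluded_labels(rng.random((h, w, c)),
                                  rng.random((h, w, c)),
                                  rng.random((h, w, c)))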

Cited by 8 publications (4 citation statements)
References 27 publications

“…In addition to these transformation invariant features, it is common in building component detection to compute several absolute metrics from the segments. Popular features include the segment's orientation in relation to the gravity or Z-axis, surface area, dimensions and the aspect ratio [4,5,11,62–65]. The most frequently used features are summarized in the right column of Table 1.…”
Section: Feature Extraction (mentioning)
confidence: 99%
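To illustrate the kind of absolute segment metrics listed in that excerpt (orientation relative to the gravity/Z axis, surface area, dimensions, aspect ratio), here is a minimal sketch that assumes a segment given as an N x 3 array of 3D points; it is not taken from any of the cited implementations.

    import numpy as np

    def segment_metrics(points):
        """Hypothetical absolute features for one segment of 3D points (N x 3)."""
        centered = points - points.mean(axis=0)
        # Principal axes of the segment from the covariance matrix (PCA).
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
        normal = eigvecs[:, 0]                      # smallest-variance axis ~ surface normal
        # Orientation: angle between the segment normal and the vertical (Z/gravity) axis.
        cos_a = abs(float(normal @ np.array([0.0, 0.0, 1.0])))
        angle_to_z = np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))
        # Dimensions: extents of the segment along its principal axes, largest first.
        proj = centered @ eigvecs
        dims = np.sort(proj.max(axis=0) - proj.min(axis=0))[::-1]
        return {
            "angle_to_z_deg": angle_to_z,
            "dimensions": dims.tolist(),
            "aspect_ratio": dims[0] / max(dims[1], 1e-9),
            "area": dims[0] * dims[1],              # planar surface-area proxy
        }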
“…But errors in layered segmentation can severely degrade the recovery of the occluded region. Learning-based approaches [13,6,44] have posed recovery from occlusion as a 2D semantic segmentation completion task. Ehsani et al. [6] propose to complete the silhouette and texture of an occluded object.…”
Section: Related Work (mentioning)
confidence: 99%
“…Even multi-view approaches [34,12,19] may fail to recover complete shape, since occlusions may block most views of the object. Single-view learning-based methods [13,6,44] have approached seeing beyond occlusion as a 2D semantic segmentation completion task, but complete 3D shape recovery adds the challenges of predicting 3D shape from a 2D image and being robust to the unknown existence and extent of an occluding region.…”
Section: Introduction (mentioning)
confidence: 99%
“…To do so, a dataset of label exemplars is required. In [14], the authors propose to find the bounding boxes of the objects using object detectors, and the regions are then classified by combining information from the detectors and surface occlusions; they also use RGB-depth to understand the scene.…”
Section: Related Work (mentioning)
confidence: 99%
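As a rough illustration of the idea that a detection box marks an occluded region whose background label must be inferred from the surrounding visible surfaces, the sketch below fills the box with the nearest visible label. This is a simplified, hypothetical stand-in, not the region-classification pipeline of [14].

    import numpy as np
    from scipy import ndimage

    def fill_occluded_background(labels, box):
        """Hypothetical completion: labels is an H x W map of visible surface labels,
        box = (x0, y0, x1, y1) is a detection treated as an occluder. Pixels inside
        the box are relabeled with the nearest visible background label."""
        x0, y0, x1, y1 = box
        occluded = np.zeros(labels.shape, dtype=bool)
        occluded[y0:y1, x0:x1] = True
        # Indices of the nearest non-occluded pixel for every pixel.
        _, (iy, ix) = ndimage.distance_transform_edt(occluded, return_indices=True)
        completed = labels.copy()
        completed[occluded] = labels[iy[occluded], ix[occluded]]
        return completed

    # Toy example: a 6 x 8 label map with two background classes and a 3 x 3 detection box.
    lab = np.repeat(np.array([[0], [0], [0], [1], [1], [1]]), 8, axis=1)
    print(fill_occluded_background(lab, (2, 1, 5, 4)))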