2018
DOI: 10.1007/978-3-030-01258-8_38
Occlusions, Motion and Depth Boundaries with a Generic Network for Disparity, Optical Flow or Scene Flow Estimation

Abstract: Occlusions play an important role in disparity and optical flow estimation, since matching costs are not available in occluded areas and occlusions indicate depth or motion boundaries. Moreover, occlusions are relevant for motion segmentation and scene flow estimation. In this paper, we present an efficient learning-based approach to estimate occlusion areas jointly with disparities or optical flow. The estimated occlusions and motion boundaries clearly improve over the state-of-the-art. Moreover, we present n…
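The abstract notes that matching costs are unavailable in occluded areas. A classical baseline the learned approach improves on is the left-right consistency check between two disparity maps; a minimal 1-D sketch (integer disparities, illustrative values, not the paper's learned method):

```python
# Hedged sketch: classical left-right consistency check for occlusion
# detection on a single scanline. Integer disparities; values illustrative.

def occlusion_mask(disp_left, disp_right, tol=1):
    """Flag a left-image pixel as occluded when its disparity disagrees
    with the right image's disparity at the matched position."""
    mask = []
    for x, d in enumerate(disp_left):
        xr = x - d                       # matched column in the right view
        if xr < 0 or xr >= len(disp_right):
            mask.append(True)            # match falls outside the image
        elif abs(d - disp_right[xr]) > tol:
            mask.append(True)            # inconsistent match -> likely occluded
        else:
            mask.append(False)
    return mask

# Background (disparity 1) next to a foreground object (disparity 4):
# background pixels whose right-view match lands on the foreground
# are flagged occluded.
left = [1, 1, 1, 1, 4, 4]
right = [4, 4, 1, 1, 1, 1]
print(occlusion_mask(left, right))  # [True, True, True, False, False, False]
```

This per-pixel check produces exactly the kind of occlusion map that, per the abstract, a network can instead predict jointly with disparity.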

Cited by 189 publications (222 citation statements)
References 50 publications (123 reference statements)
“…When applied to down-scaled images, these methods run faster, but give blurry results and inaccurate disparity estimates for the far-field. Recent "deep" stereo methods perform well on low-resolution benchmarks [5,11,16,21,38], while failing to produce SOTA results on high-res benchmarks [26]. This is likely due to: 1) Their architectures are not efficiently designed to operate on high-resolution images.…”
Section: Introduction
confidence: 99%
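The statement above claims far-field disparity degrades at low resolution. The reason follows from the depth-from-disparity relation Z = f·B/d: a fixed disparity error costs far more depth accuracy at small (far-field) disparities. A short sketch with illustrative focal length and baseline (not values from the cited papers):

```python
# Hedged sketch: why far-field depth degrades with coarse disparity.
# Depth Z = f * B / d; a fixed disparity error hurts far more at small
# (far-field) disparities. Focal length and baseline are illustrative.

def depth(disparity, focal_px=1000.0, baseline_m=0.5):
    return focal_px * baseline_m / disparity

err = 0.5  # half-pixel disparity error, e.g. from downsampling
for d in (50.0, 2.0):  # near object vs far object
    z = depth(d)
    dz = abs(depth(d - err) - z)
    print(f"disparity {d:5.1f}px -> depth {z:7.2f} m, error {dz:7.2f} m")
```

A half-pixel error costs about 0.1 m at disparity 50 but over 80 m at disparity 2, which is why downsampling chiefly destroys far-field estimates.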
“…For object detection, we use both the recurrent rolling convolution (RRC) detector [29] and Track R-CNN [38]. We use optical flow obtained from [16]. Bounding Boxes to Segmentation Masks.…”
Section: Our Approach
confidence: 99%
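The citing work above combines detector boxes with optical flow from [16]. One common way such pipelines carry a box to the next frame is to shift it by the mean flow inside it; a minimal sketch under that assumption (a simplification, not the cited paper's exact method, names illustrative):

```python
# Hedged sketch: propagate a detection box to the next frame by the
# mean optical flow inside it. A simplification of tracking-by-flow
# pipelines; function and variable names are illustrative.

def propagate_box(box, flow):
    """box = (x0, y0, x1, y1) in pixels; flow[y][x] = (u, v) per pixel."""
    x0, y0, x1, y1 = box
    us, vs = [], []
    for y in range(y0, y1):
        for x in range(x0, x1):
            u, v = flow[y][x]
            us.append(u)
            vs.append(v)
    mu = sum(us) / len(us)  # mean horizontal motion inside the box
    mv = sum(vs) / len(vs)  # mean vertical motion inside the box
    return (x0 + mu, y0 + mv, x1 + mu, y1 + mv)

# 4x4 frame with uniform rightward flow of 1 px: the box shifts right.
flow = [[(1.0, 0.0)] * 4 for _ in range(4)]
print(propagate_box((0, 0, 2, 2), flow))  # (1.0, 0.0, 3.0, 2.0)
```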
“…We train our proposed network with and without the self-supervision loss for the flow network. The officially provided pretrained model on FlyingChair dataset [10] is used if the self-supervision loss is disabled. Experimental results demonstrate that the flow network pretrained on the FlyingChair dataset [10] can generalize to our dataset, but with limited performance.…”
Section: Evaluation Metrics
confidence: 99%
“…The officially provided pretrained model on FlyingChair dataset [10] is used if the self-supervision loss is disabled. Experimental results demonstrate that the flow network pretrained on the FlyingChair dataset [10] can generalize to our dataset, but with limited performance. The resulting deblur network achieves a PSNR of 31.23 dB and an SSIM of 0.89 on our synthetic dataset, in contrast to 32.24 dB/0.91 if the network is trained in a fully self-supervised manner.…”
Section: Evaluation Metrics
confidence: 99%