2016
DOI: 10.1007/978-3-319-46466-4_10

Exploiting Semantic Information and Deep Matching for Optical Flow

Abstract: We tackle the problem of estimating optical flow from a monocular camera in the context of autonomous driving. We build on the observation that the scene is typically composed of a static background, as well as a relatively small number of traffic participants which move rigidly in 3D. We propose to estimate the traffic participants using instance-level segmentation. For each traffic participant, we use the epipolar constraints that govern each independent motion for faster and more accurate estimation. Our se…
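The per-object epipolar constraint mentioned in the abstract can be illustrated with a minimal sketch. This is a generic example, not the paper's implementation: the rotation, translation, and 3D point below are made-up values, and intrinsics are taken as the identity so the fundamental matrix equals the essential matrix E = [t]_x R. For a rigidly moving object, corresponding points x1 and x2 in two frames satisfy x2^T E x1 = 0, which restricts each pixel's match to a 1-D search along its epipolar line.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical rigid motion of one traffic participant relative to the camera:
# a small rotation about the vertical axis plus a forward/sideways translation.
theta = 0.05
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.2, 0.0, 1.0])

# Essential matrix for this motion (with identity intrinsics, F == E).
E = skew(t) @ R

# A 3D point on the object, observed in both frames (normalized coordinates).
X = np.array([1.0, 0.5, 8.0])
x1 = X / X[2]                  # projection in frame 1
X2 = R @ X + t
x2 = X2 / X2[2]                # projection in frame 2

# Epipolar constraint: x2^T E x1 vanishes for a correct correspondence.
residual = x2 @ E @ x1
print(abs(residual) < 1e-9)    # True
```

In the paper's setting, one such constraint per segmented traffic participant (plus one for the static background) replaces a full 2-D matching problem with several cheaper 1-D ones.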

Cited by 92 publications (105 citation statements) · References 40 publications
“…In the context of image matching, deep matching networks were successfully trained for tasks such as stereo estimation [32,33], optical flow estimation [34,35], aerial image matching [36] or ground to aerial image matching [37]. In [38], a deep learning-based method is proposed to detect and match multiscale keypoints with two separated networks.…”
Section: Related Work
confidence: 99%
“…The proposed algorithm successfully exploits the temporal displacement of a third image to accurately recover camera motion and also delivers high performing optical flow and disparity estimation results even though only the general motion is computed, no pre-computed optical flow is used, and no convolutional neural network (e.g. [19,5,2]) or prior 3D models (e.g. cars) are used.…”
Section: Discussion
confidence: 99%
“…The plane D_p has two parameters: a 3D unit normal vector n_p = (n_x^p, n_y^p, n_z^p) and a disparity d_p. The disparity of pixel q = (x_q, y_q) under D_p is given by: D_p(q) = a·x_q + b·y_q + c (2), where a = −n_x^p / n_z^p, b = −n_y^p / n_z^p, and c = (n_x^p·x_p + n_y^p·y_p + n_z^p·d_p) / n_z^p [4]. C_p is a function that measures the similarity/dissimilarity of three pixels, e.g.…”
Section: Introduction
confidence: 99%
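The slanted-plane disparity model quoted above (common in PatchMatch-style stereo) can be sketched numerically. The plane normal, anchor pixel, and disparity below are made-up values for illustration only:

```python
import numpy as np

def plane_coeffs(n, p, d_p):
    """Convert a 3D unit normal n = (nx, ny, nz) and an anchor pixel
    p = (xp, yp) with disparity d_p into the affine disparity model
    D(q) = a*xq + b*yq + c."""
    nx, ny, nz = n
    a = -nx / nz
    b = -ny / nz
    c = (nx * p[0] + ny * p[1] + nz * d_p) / nz
    return a, b, c

# Made-up slanted plane: normal tilted slightly away from the viewing axis.
n = np.array([0.1, -0.2, 0.97])
n = n / np.linalg.norm(n)      # keep it a unit normal
p = (120.0, 80.0)              # anchor pixel of the plane
d_p = 32.0                     # disparity at the anchor pixel

a, b, c = plane_coeffs(n, p, d_p)

# The model reproduces the anchor disparity exactly at p itself...
print(np.isclose(a * p[0] + b * p[1] + c, d_p))   # True
# ...and extrapolates a smoothly varying disparity at a neighbouring pixel q.
q = (125.0, 78.0)
print(a * q[0] + b * q[1] + c)
```

The coefficients follow from requiring the plane n_x·x + n_y·y + n_z·d = const to pass through (x_p, y_p, d_p), then solving for d.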
“…Recently, 2D optical flow benchmarks have been dominated by label-based methods [7,24], propagation methods [4,18], neural regression networks [10] and models that exploit scene-specific properties like semantics [35,3]. Most of these models do not scale well to the volumetric domain and struggle heavily with memory consumption.…”
Section: Related Work
confidence: 99%