2018
DOI: 10.1007/978-3-030-01267-0_17

Conditional Prior Networks for Optical Flow

Abstract: Classical computation of optical flow involves generic priors (regularizers) that capture rudimentary statistics of images, but not long-range correlations or semantics. On the other hand, fully supervised methods learn the regularity in the annotated data, without explicit regularization and with the risk of overfitting. We seek to learn richer priors on the set of possible flows that are statistically compatible with an image. Once the prior is learned in a supervised fashion, one can easily learn the full m…
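To make the idea in the abstract concrete, below is a minimal sketch of a conditional-prior-style network in PyTorch. The module names (FlowEncoder, ImageEncoder, CPNSketch), the layer sizes, and the use of an L1 reconstruction error as the prior energy are illustrative assumptions, not the paper's exact architecture; the intent is only to show a flow autoencoder whose decoder is conditioned on the image, so that reconstruction error measures how compatible a flow is with that image.

```python
# Hypothetical sketch of a conditional-prior-style network (PyTorch).
# Names, layer sizes, and the L1 prior energy are illustrative assumptions.
import torch
import torch.nn as nn


class FlowEncoder(nn.Module):
    """Compresses a 2-channel flow field into a low-dimensional code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1),
        )

    def forward(self, flow):
        return self.net(flow)


class ImageEncoder(nn.Module):
    """Extracts image features that condition the flow decoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1),
        )

    def forward(self, image):
        return self.net(image)


class CPNSketch(nn.Module):
    """Reconstructs a flow field from its code *and* the image features.

    After supervised training on (image, flow) pairs, a low reconstruction
    error indicates the flow is statistically compatible with the image, so
    the negative reconstruction error can act as an (unnormalized) log-prior.
    """
    def __init__(self):
        super().__init__()
        self.flow_enc = FlowEncoder()
        self.image_enc = ImageEncoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, flow, image):
        code = torch.cat([self.flow_enc(flow), self.image_enc(image)], dim=1)
        return self.decoder(code)

    def prior_energy(self, flow, image):
        # Per-sample reconstruction error; lower means a more plausible flow.
        recon = self.forward(flow, image)
        return (recon - flow).abs().mean(dim=(1, 2, 3))
```

Once such a network is trained on annotated (image, flow) pairs, its reconstruction error can be frozen and reused as a regularizer, which is how the citing works below employ it.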

Cited by 40 publications (26 citation statements) · References 46 publications
“…In order to exploit a previously observed dataset D, we use a Conditional Prior Network (CPN) [35] in our framework. Conditional Prior Networks infer the probability of an optical flow given a single image.…”
Section: Methods
Confidence: 99%
“…Instead of having a separate pose network or using direct visual odometry methods, [20] uses Perspective-n-Point (PnP) [18] and Random Sample Consensus (RANSAC) [8] to obtain pose. We exploit a recently introduced method to learn the conditional prior [35] to take into account scene semantics rather than using a local smoothness assumption.…”
Section: Related Work
Confidence: 99%
“…Liu et al. [34] enriched the training data with artificial occlusions, significantly improving occlusion performance. Of great interest in this area is the work of Yang et al. [35], who proposed a Conditional Prior Network that regularizes the output based on the input. In essence, the network captures the possible motion space of a given single input image.…”
Section: Related Work
Confidence: 99%
“…This is typically implemented as an additive loss term. Unlike the works in [17,18,19,20], where only low-level image features are used, our method utilizes high-level semantic object segmentation information to calculate the regularization term, and in contrast to [33,35], its computation is simpler and requires no training.…”
Section: Related Work
Confidence: 99%
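The statements above describe using the learned conditional prior as an additive regularization term in an otherwise unsupervised flow objective. Below is a hedged sketch of that usage pattern, reusing the hypothetical CPNSketch module from the earlier snippet; the warping helper, the flow_net interface, and the weight lambda_prior are illustrative assumptions rather than code from any of the cited papers.

```python
# Hedged sketch: frozen conditional prior added to a photometric loss.
# `flow_net`, `cpn`, and `lambda_prior` are assumed, not from the papers.
import torch
import torch.nn.functional as F


def warp(image2, flow):
    """Backward-warp image2 toward image1 using the predicted flow."""
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device, dtype=flow.dtype),
        torch.arange(w, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid_x = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    grid_y = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1)  # B x H x W x 2
    return F.grid_sample(image2, grid, align_corners=True)


def unsupervised_loss(flow_net, cpn, image1, image2, lambda_prior=0.1):
    """Photometric data term plus the learned conditional-prior term."""
    flow = flow_net(image1, image2)
    photometric = (warp(image2, flow) - image1).abs().mean()
    # cpn's weights are assumed frozen beforehand (cpn.requires_grad_(False));
    # gradients still flow through `flow`, so the learned prior shapes the
    # estimate in place of a generic local-smoothness regularizer.
    prior = cpn.prior_energy(flow, image1).mean()
    return photometric + lambda_prior * prior
```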