2018
DOI: 10.48550/arxiv.1811.00684
Preprint

SDCNet: Video Prediction Using Spatially-Displaced Convolution

Cited by 1 publication (1 citation statement) · References 0 publications
“…On the contrary, motion-based methods [8,9] excel in making sharp predictions, yet fail in occlusion areas where motion predictions are erroneous or ill-defined. Meanwhile, Reda et al. [34] propose to model moving appearances with both convolutional kernels as in [10] and vectors as optical flow. Our closest prior work is [11], which also composes the pixel- and flow-based predictions through occlusion maps.…”
Section: High-Fidelity Video Prediction
confidence: 99%
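
The citing statement above summarizes the paper's core operation: synthesizing each output pixel by applying a per-pixel predicted kernel to a patch sampled at a flow-displaced location, combining kernel-based and vector-based motion modeling. A minimal NumPy sketch of that idea follows; the function name, array shapes, and the nearest-neighbour sampling are illustrative assumptions, not the authors' implementation (which would use bilinear sampling and learned predictions):

```python
# Minimal sketch of spatially-displaced convolution (SDC).
# Assumed inputs (not from the paper's code): a grayscale frame,
# a per-pixel flow field, and per-pixel K x K adaptive kernels.
import numpy as np

def sdc_predict(frame, flow, kernels):
    """frame:   (H, W)        previous frame
    flow:    (H, W, 2)     per-pixel (dy, dx) displacement
    kernels: (H, W, K, K)  per-pixel adaptive kernel
    Returns the synthesized (H, W) next frame."""
    H, W = frame.shape
    K = kernels.shape[-1]
    r = K // 2
    # Pad so displaced kernel windows stay in bounds.
    pad = r + int(np.ceil(np.abs(flow).max()))
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame)
    for y in range(H):
        for x in range(W):
            # Displace the sampling centre by the predicted flow
            # (rounded to the nearest pixel for simplicity).
            cy = pad + y + int(round(flow[y, x, 0]))
            cx = pad + x + int(round(flow[y, x, 1]))
            patch = padded[cy - r:cy + r + 1, cx - r:cx + r + 1]
            # Weight the displaced patch by the per-pixel kernel.
            out[y, x] = np.sum(patch * kernels[y, x])
    return out
```

With zero flow and a delta kernel this reduces to the identity; with a pure flow and a delta kernel it reduces to flow-based warping, which is the sense in which SDC unifies the kernel- and vector-based approaches contrasted in the quoted statement.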