2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01422

XVFI: eXtreme Video Frame Interpolation

Cited by 130 publications (84 citation statements)
References 39 publications

“…(a) overlaid inputs (b) SoftSplat [44] (c) XVFI [57] (d) Ours Figure 8. Qualitative comparison of our proposed approach with two representative methods on a sample from the XTEST-2K [57] test dataset. While these sophisticated interpolation methods are unable to handle this challenging scenario with the utility pole subject to large motion, our comparatively simple approach is able to generate a plausible result.…”
Section: Methods
confidence: 99%
“…Handling large motion is an important yet under-explored topic in frame interpolation. The work in [32] handles large motion by training on 4K sequences with extreme motion. While this is a viable approach, it does not generalize well on regular footage as discussed in [26].…”
Section: Related Work
confidence: 99%
“…Second, at test time, models perform well when the motion range matches that of the training datasets, but generalize poorly on more extreme motion. One could approach this with data that captures the desired range [32]. We instead propose a network that generalizes well to both small and large motion.…”
Section: Introduction
confidence: 99%
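
For context on the interpolation task these citation statements discuss, below is a minimal sketch of generic flow-based intermediate-frame synthesis: warp both inputs toward an intermediate time t with linearly scaled bidirectional flows and blend. This is not XVFI's architecture; the function names, the linear-motion flow approximation, and the assumption of pre-computed flows (e.g. from an off-the-shelf estimator) are illustrative only.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Backward-warp `frame` (N, C, H, W) by `flow` (N, 2, H, W) given in pixels (x, y)."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device),
        torch.arange(w, device=frame.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    coords = base + flow
    # Normalize sampling coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def interpolate_frame(frame0, frame1, flow_0to1, flow_1to0, t=0.5):
    """Naive intermediate-frame estimate at time t in (0, 1).

    Under a linear-motion assumption, the flow from time t back to frame 0 is
    approximated by t * flow_1to0, and the flow from t to frame 1 by
    (1 - t) * flow_0to1. Real methods add occlusion reasoning and a synthesis
    network on top of this simple blend.
    """
    warped0 = backward_warp(frame0, t * flow_1to0)          # approx. F_{t->0}
    warped1 = backward_warp(frame1, (1.0 - t) * flow_0to1)  # approx. F_{t->1}
    return (1.0 - t) * warped0 + t * warped1
```

The blend weights favor the temporally closer input, which is the usual heuristic when no occlusion masks are available; large motion is exactly where this linear approximation breaks down, which is the failure mode the quoted statements are debating.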