2021
DOI: 10.1109/tcsvt.2020.2981964
Robust Video Frame Interpolation With Exceptional Motion Map

Cited by 27 publications (7 citation statements) · References 30 publications
“…Liu et al [31] proposed a cycle consistency neural network in which the synthesized frames are asserted to be more reliable if they could be used to reconstruct the input frames accurately. Park et al [32] proposed a VFI method by considering the exceptional motion patterns. Lee et al [33] proposed a new warping module, namely adaptive collaboration of flows (AdaCoF), to estimate both kernel weights and offset vectors for each target pixel to synthesize the missing frame.…”
Section: TSR for Video
confidence: 99%
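
The AdaCoF warping module cited above estimates, for every target pixel, a set of kernel weights together with offset vectors that say where in the input frame those weights should be applied. The following is a minimal numpy sketch of that idea, not the authors' implementation: the function name, array shapes, and the rounding of offsets to integer coordinates are assumptions for illustration (the published method samples at fractional offsets with bilinear interpolation and works on full colour frames).

import numpy as np

def adacof_warp(frame, weights, alpha, beta, F=5, d=1):
    """Sketch of AdaCoF-style warping for a single-channel frame.

    frame:   (H, W) input frame
    weights: (H, W, F*F) per-pixel kernel weights (assumed normalised)
    alpha:   (H, W, F*F) per-pixel vertical offsets, in pixels
    beta:    (H, W, F*F) per-pixel horizontal offsets, in pixels
    d:       dilation of the regular F*F sampling grid
    """
    H, W = frame.shape
    out = np.zeros((H, W), dtype=np.float64)
    grid = np.arange(F) - F // 2  # regular grid centred on the target pixel
    k = 0
    for i in grid:                # vertical grid position
        for j in grid:            # horizontal grid position
            # sample coordinate = grid position * dilation + learned offset
            ys = np.clip(np.arange(H)[:, None] + i * d
                         + np.round(alpha[..., k]).astype(int), 0, H - 1)
            xs = np.clip(np.arange(W)[None, :] + j * d
                         + np.round(beta[..., k]).astype(int), 0, W - 1)
            out += weights[..., k] * frame[ys, xs]
            k += 1
    return out

Because both the weights and the offsets are predicted per pixel, the module can collaborate with arbitrarily shaped local neighbourhoods rather than a fixed kernel support, which is the point the citing authors highlight.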
“…Interpolated frames, whose quality highly depends on the accuracy of the computationally expensive optical flow computation [32], typically suffer from motion boundaries and severe occlusions, thus showing strong artifacts even with state-of-the-art optical flow algorithms [33]. More recent promising works rely on neural networks to either predict convolution kernels for each pixel used to generate the interpolated frames [34] or leverage optical flow fields with exceptional motion maps [35]. However, these techniques involve a large number of convolutions, sometimes with large kernels (up to 41x41 for each pixel) to cope with large motion, thus making the computational demand unsuitable for real-time use-cases.…”
Section: Motion Blur Rendering and Video Frame Interpolation
confidence: 99%
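
The per-pixel kernel-prediction approach referenced in that excerpt can be illustrated with a short sketch of the synthesis step, assuming the kernels have already been predicted by a network. The function name, shapes, and padding choice are hypothetical, and the nested Python loops are written for clarity rather than speed; they also make the cost argument of the citing authors concrete.

import numpy as np

def kernel_interp(frame0, frame1, k0, k1, K=41):
    """Sketch of per-pixel kernel interpolation between two frames.

    frame0, frame1: (H, W) input frames
    k0, k1:         (H, W, K, K) per-pixel kernels, assumed to jointly sum to 1
    """
    H, W = frame0.shape
    p = K // 2
    f0 = np.pad(frame0, p, mode="reflect")
    f1 = np.pad(frame1, p, mode="reflect")
    out = np.empty((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            patch0 = f0[y:y + K, x:x + K]
            patch1 = f1[y:y + K, x:x + K]
            # the interpolated pixel is a weighted sum over both local patches
            out[y, x] = np.sum(k0[y, x] * patch0) + np.sum(k1[y, x] * patch1)
    return out

With K = 41 this amounts to roughly 2 * 41 * 41 ≈ 3,400 multiply-adds per output pixel, before the cost of predicting the kernels themselves, which is why the citing authors consider such methods too demanding for real-time use.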
“…The rapid pace of CNN research will certainly edge performance further ahead in the years to come. While this paper was in review, this kind of exploration began to emerge in the literature [54, 60, 61], with various CNN‐derived post‐processing strategies able to contribute another 1.3 dB [61] onto existing systems.…”
Section: Final Comments
confidence: 99%