FILM: Frame Interpolation for Large Motion
2022
DOI: 10.1007/978-3-031-20071-7_15

Cited by 75 publications (41 citation statements: 0 supporting, 41 mentioning, 0 contrasting)
References 30 publications
“…Extending to Frame Interpolation. Our method can potentially be modified and improved for frame interpolation using only two input images [Reda et al. 2022]. As a proof of concept, we take two human portrait images of the same person and deploy our method on those images.…”
Section: 2.4 (mentioning)
confidence: 99%
“…Moreover, instead of predicting the target texture directly from deep features like most other VFI algorithms [5], [8], [14], [11], [12], [20], we propose a new context synthesis module (CSM) that simplifies frame synthesis at each level by borrowing existing texture from the adjacent input image pyramids. This means the decoder D^l of PMCRNet only needs to predict the easier intermediate optical flows F^{l-1}_{t→0} and F^{l-1}_{t→1}, a one-channel occlusion merge mask M^{l-1}, and a three-channel image residual R^{l-1}, as shown in Fig. 1.…”
Section: B. Joint Refinement Decoder (mentioning)
confidence: 99%
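
For illustration, here is a minimal PyTorch sketch of the texture-borrowing synthesis step the excerpt describes: a decoder level predicts two intermediate flows, a one-channel merge mask, and a three-channel residual, and the target frame is synthesized by backward-warping the two input pyramid levels and blending them. All names below (backward_warp, synthesize_level) are hypothetical; this is a sketch of the general technique under stated assumptions, not the cited PMCRNet implementation.

import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Sample img (B, C, H, W) at positions displaced by flow (B, 2, H, W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    # Normalize displaced coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    gy = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)          # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def synthesize_level(i0, i1, flow_t0, flow_t1, mask, residual):
    """Borrow texture from both inputs instead of regressing it:
    warp each input, blend with the occlusion merge mask, add the residual."""
    w0 = backward_warp(i0, flow_t0)   # texture borrowed from frame 0
    w1 = backward_warp(i1, flow_t1)   # texture borrowed from frame 1
    m = torch.sigmoid(mask)           # one-channel merge weight in (0, 1)
    return m * w0 + (1.0 - m) * w1 + residual

The design point of the excerpt is visible here: the decoder's outputs (two flow fields, a mask, a residual) are low-dimensional and easier to regress than full RGB texture, which is instead copied from the inputs by warping.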
“…Based on the above analysis, a motion-context joint-refinement-based PMCRNet has been built for VFI. To optimize this network, we employ the Charbonnier loss ρ [29] in place of the L_1 loss used by many existing methods [5], [6], [11], [12], [20]. In addition, inspired by the robustness of the census loss L_cen in unsupervised optical flow estimation [30], [31], we add it as a complementary loss term; it computes the soft Hamming distance between census-transformed image patches of size 7×7.…”
Section: Loss Function (mentioning)
confidence: 99%
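
For reference, the Charbonnier penalty is ρ(x) = sqrt(x² + ε²), a smooth, robust substitute for L_1, and the census loss compares census-transformed 7×7 patches via a soft Hamming distance. The PyTorch sketch below follows common unsupervised optical flow implementations; the soft-sign and soft-Hamming constants (0.81 and 0.1) are conventional defaults, not necessarily the values used in the cited work, and boundary effects from padding are ignored for brevity.

import torch
import torch.nn.functional as F

def charbonnier(pred, target, eps=1e-6):
    """Charbonnier penalty: a differentiable, robust variant of L1."""
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

def census_transform(img, patch=7):
    """Compare each grayscale pixel with its patch x patch neighborhood,
    producing a patch*patch-channel soft census descriptor."""
    gray = img.mean(dim=1, keepdim=True)          # (B, 1, H, W)
    # One identity kernel per neighborhood offset extracts shifted copies.
    kernel = torch.eye(patch * patch, device=img.device, dtype=img.dtype)
    kernel = kernel.view(patch * patch, 1, patch, patch)
    neighbors = F.conv2d(gray, kernel, padding=patch // 2)
    diff = neighbors - gray
    # Soft sign keeps the transform differentiable.
    return diff / torch.sqrt(0.81 + diff ** 2)

def census_loss(pred, target, patch=7):
    """Soft Hamming distance between census descriptors of two images."""
    d = census_transform(pred, patch) - census_transform(target, patch)
    dist = (d ** 2) / (0.1 + d ** 2)              # per-offset soft mismatch
    return dist.sum(dim=1).mean()

The census term is illumination-robust because the descriptor encodes only local intensity ordering, which is why it complements a plain photometric penalty such as the Charbonnier loss.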