2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00946
Scene-Adaptive Video Frame Interpolation via Meta-Learning

Cited by 46 publications (17 citation statements)
References 24 publications
“…Recently, [29], [42], [45], [52] addressed the motion blur and motion aliasing of complex scenes in temporal frame interpolation. Choi et al. [5] proposed a novel scene-adaptation framework to further improve frame interpolation models via meta-learning. Instead of synthesizing the intermediate LR frames as current VFI methods do, our one-stage framework interpolates features from two neighboring LR frames to directly synthesize LR feature maps for missing frames without explicit supervision.…”
Section: Video Frame Interpolation
confidence: 99%
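The scene-adaptation idea cited above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the one-parameter blend model and the function `adapt_and_interpolate` are hypothetical stand-ins for a real interpolation network, used only to show the test-time adaptation step, where the model is fine-tuned on an observed frame triplet before interpolating new frames.

```python
import numpy as np

def interp(alpha, f0, f1):
    # Toy "interpolation model": a single learnable blend weight alpha
    # standing in for a full frame-interpolation network.
    return alpha * f0 + (1.0 - alpha) * f1

def adapt_and_interpolate(alpha, f0, f1, f2, lr=0.1):
    """One test-time adaptation step in the spirit of meta-learned scene
    adaptation: nudge alpha so that interp(f0, f2) better matches the
    observed middle frame f1, then interpolate between f1 and f2 with
    the adapted parameter."""
    pred = interp(alpha, f0, f2)
    # Gradient of the mean squared error ||pred - f1||^2 w.r.t. alpha.
    grad = np.mean(2.0 * (pred - f1) * (f0 - f2))
    alpha_adapted = alpha - lr * grad
    return interp(alpha_adapted, f1, f2), alpha_adapted
```

In the meta-learning formulation, the outer loop would train the initial parameters so that this single inner gradient step yields a large improvement on each new scene; here only the inner step is sketched.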
“…Some recent research enhances the performance of interpolation by making use of auxiliary information (e.g., more reference frames [39,47] and high-frame-rate video with low spatial resolution [65]). Besides, a good initialization and fine-tuning of pre-trained sub-networks (such as PWC-Net [17], RBPN [62], MegaDepth [63]) can greatly help to produce high-quality interpolation results.…”
Section: Discussion and Limitations
confidence: 99%
“…Xiang et al. [46] proposed a one-stage space-time video super-resolution framework for joint frame interpolation and super-resolution. Choi et al. [47] proposed to improve the performance of an interpolation algorithm by incorporating meta-learning.…”
Section: Single Frame Interpolation
confidence: 99%
“…Thus, we adopt a lightweight optical flow network [31] on LR frames and a flow refinement network [26] to get the middle flow on HR frames, and we try a new supervised flow loss to achieve better perception. Recently, meta-learning has also been introduced into frame interpolation [7]; CAIN [8] adapts channel attention to VFI; and EDSC [6] uses ConvLSTM to learn motion offsets for implicit motion compensation.…”
Section: Video Frame Interpolation (VFI)
confidence: 99%
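The channel-attention mechanism that the last excerpt attributes to CAIN can be sketched as a squeeze-and-excitation-style gate. This is a minimal NumPy illustration under assumed shapes, not CAIN's actual architecture; `channel_attention`, `w1`, and `w2` are hypothetical names.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention: global-average-pool
    each channel, pass the statistics through a small two-layer gate, and
    rescale the channels. Assumed shapes: feat (C, H, W), w1 (C//r, C),
    w2 (C, C//r) for a bottleneck ratio r."""
    squeeze = feat.mean(axis=(1, 2))               # (C,) per-channel statistics
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]              # reweight channels
```

Because the gate lies strictly in (0, 1), each channel is attenuated according to its pooled statistics; in a VFI model this lets the network emphasize channels that carry useful motion or appearance cues from the concatenated input frames.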