2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00384
Deep Flow-Guided Video Inpainting

Abstract: Video inpainting, which aims at filling in missing regions of a video, remains challenging due to the difficulty of preserving the precise spatial and temporal coherence of video contents. In this work we propose a novel flow-guided video inpainting approach. Rather than filling in the RGB pixels of each frame directly, we consider video inpainting as a pixel propagation problem. We first synthesize a spatially and temporally coherent optical flow field across video frames using a newly designed Deep Flow Comp…
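The key idea in the abstract is to treat video inpainting as pixel propagation along a completed optical flow field rather than direct RGB synthesis. The sketch below is a minimal illustration of that propagation step only, not the authors' released code: it assumes the flow field has already been completed, propagates pixels only backward in time (the paper propagates in both directions), and leaves never-visible pixels for a separate image-inpainting step. The function names (warp_frame, propagate_pixels) are illustrative.

```python
# Minimal sketch of flow-guided pixel propagation, assuming the completed
# (hole-filled) optical flow is already available.
import numpy as np
import cv2


def warp_frame(frame, flow):
    """Backward-warp `frame` into the current view using a dense flow field.

    `flow[y, x]` is the (dx, dy) offset from the current frame to the source
    frame, so sampling the source at (x + dx, y + dy) pulls pixels back.
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)


def propagate_pixels(frames, masks, flows_to_next):
    """Fill masked pixels by propagating known pixels from later frames.

    frames:        list of HxWx3 uint8 frames
    masks:         list of HxW bool arrays, True where pixels are missing
    flows_to_next: list of HxWx2 float32 flows from frame t to frame t+1
                   (assumed to be spatially/temporally completed beforehand)
    """
    filled = [f.copy() for f in frames]
    holes = [m.copy() for m in masks]
    # Propagate backward in time so frame t can borrow pixels from frame t+1.
    for t in range(len(frames) - 2, -1, -1):
        warped_rgb = warp_frame(filled[t + 1], flows_to_next[t])
        warped_hole = warp_frame(
            holes[t + 1].astype(np.float32), flows_to_next[t]) > 0.5
        # Copy only pixels missing here but known in the warped next frame.
        fill_from_next = holes[t] & ~warped_hole
        filled[t][fill_from_next] = warped_rgb[fill_from_next]
        holes[t] &= ~fill_from_next
    # Pixels never visible in any frame still need single-image inpainting.
    return filled, holes
```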

Cited by 244 publications (220 citation statements)
References 36 publications
“…Object removal video forgery is achieved by using inpainting algorithms [33]-[35]. The following works are proposed to detect object removal video forgery [36]-[44].…”
Section: Related Work (mentioning)
confidence: 99%
“…In other words, object-based forgery is performed in the middle of frames, e.g., a walking person is removed before leaving a video scene, so this person is seen for a couple of seconds and then suddenly disappears from the video scene. Hence, we use the SYSU-OBJFORG data set to generate realistic object removal forged videos by using two recent inpainting algorithms [33], [35]. Fig. 7 shows three examples of object removal forgery from the data set.…”
Section: A Data Set (mentioning)
confidence: 99%
“…Most existing video inpainting methods build on patch-based synthesis with spatial-temporal matching [16,27,33,44] or explicit motion estimation and tracking [1,6,8,9]. Very recently, deep convolutional networks have been used to directly inpaint holes in videos and achieve promising results [24,41,45], leveraging a large external video corpus for training along with specialized recurrent frameworks to model spatial-temporal coherence. Different from their work, we explore the orthogonal direction of learning-based video inpainting by investigating an internal (within-video) learning approach.…”
Section: Related Work (mentioning)
confidence: 99%
“…for example Li, Gauci & Groß, 2016) evaluated, see Goodfellow et al. (2014). 20 Illustrative examples of machine pattern recognition and its productive application are, besides the impressive pattern recognition for image and video editing (Xu et al., 2019), machine-generated images such as those shown by Klingemann n.d., Barrat n.d., Bethge n.d., and Valenzuela n.d., as well as, most recently, the results of BigGAN (cf.…”
Section: Formannahme auf und aus Gründen (unclassified)