2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00663
Learning to Extract a Video Sequence from a Single Motion-Blurred Image

Cited by 124 publications (233 citation statements)
References 18 publications
“…Autocorrelation techniques require a relatively large neighborhood to estimate blur parameters and such methods are not suitable for small moving objects. More recently, deep learning has been applied to motion deblurring of videos [37,31] and to the generation of intermediate short-exposure frames [14]. The proposed convolutional neural networks are trained only on small blurs; blur parameters are not available as they are not directly estimated.…”
Section: Related Work
confidence: 99%
“…Gong et al. [7] learned the optical flow field from a single blurry image directly through a fully convolutional deep neural network and recovered the clean image from the learned optical flow. Jin et al. [15] extracted a video sequence from a single motion-blurred image by introducing loss functions invariant to the temporal order. Li et al. [22] used a learned image prior to distinguish whether an image is sharp or not and embedded the learned prior into the MAP framework.…”
Section: Related Work
confidence: 99%
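The temporal-order-invariant loss mentioned in this excerpt is straightforward to illustrate: a single blurred image cannot reveal whether the underlying motion ran forwards or backwards, so the training loss can be made insensitive to that ambiguity by scoring the prediction against both the ground-truth sequence and its time reversal and keeping the smaller error. The sketch below is a minimal simplification of that idea, not the exact formulation of Jin et al. [15]; the function name and the (B, T, C, H, W) tensor layout are assumptions.

```python
import torch

def order_invariant_l1(pred, target):
    """L1 loss between predicted and ground-truth frame sequences,
    taken as the minimum over the forward and time-reversed ordering.

    pred, target: tensors of shape (B, T, C, H, W).
    """
    forward = (pred - target).abs().mean()
    # Flip the ground truth along the time axis (dim 1).
    backward = (pred - target.flip(dims=[1])).abs().mean()
    # Keep whichever temporal ordering the prediction matches better,
    # so the network is not penalized for the directional ambiguity.
    return torch.minimum(forward, backward)

# Usage with random stand-in data: 2 sequences of 7 RGB frames at 64x64.
pred = torch.rand(2, 7, 3, 64, 64)
target = torch.rand(2, 7, 3, 64, 64)
loss = order_invariant_l1(pred, target)
```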
“…One application we explore is the formation of videos from dramatically motion-blurred images, created by temporally aggregating photons from a scene over an extended period of time. Two recent studies present the deterministic recovery of a video sequence from a single motion-blurred image [18,30]. We propose a general deprojection framework for dimensions including, but not limited to, time.…”
Section: Inverting a Motion-Blurred Image to Video
confidence: 99%
“…The digits can occlude one another and bounce off the edges of the frames. Given a dataset of 64 × 64 × 10-sized video subclips, we generate each projection x by averaging the frames in time, similar to other studies that generate motion-blurred images at a large scale [18,21,27,28]. Despite the simple appearance and dynamics of this dataset, synthesizing digit appearances and capturing the plausible directions of each trajectory is challenging.…”
Section: Temporal Deprojections with Moving MNIST
confidence: 99%
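The temporal averaging described in this excerpt, reducing a short clip to a single motion-blurred "projection", can be sketched in a few lines. The function name and array layout below are assumptions for illustration, but the operation matches the description: a 10-frame 64 × 64 subclip is collapsed to one blurred image by taking the mean over the time axis.

```python
import numpy as np

def temporal_projection(clip):
    """Synthesize a motion-blurred image from a short video clip by
    averaging its frames over time, mimicking a long exposure.

    clip: array of shape (T, H, W) or (T, H, W, C) with values in [0, 1],
          e.g. a 10 x 64 x 64 Moving MNIST subclip.
    """
    return clip.mean(axis=0)

# Example with a random stand-in for a 10-frame 64x64 subclip.
clip = np.random.rand(10, 64, 64).astype(np.float32)
blurred = temporal_projection(clip)  # shape (64, 64)
```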