Proceedings of the Thematic Workshops of ACM Multimedia 2017
DOI: 10.1145/3126686.3126737

Video Imagination from a Single Image with Transformation Generation

Abstract: In this work, we focus on a challenging task: synthesizing multiple imaginary videos given a single image. Major problems come from the high dimensionality of pixel space and the ambiguity of potential motions. To overcome those problems, we propose a new framework that produces imaginary videos by transformation generation. The generated transformations are applied to the original image in a novel volumetric merge network to reconstruct frames in the imaginary video. Through sampling different latent variables, our method …
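The abstract describes sampling a latent variable, generating transformations from it, and applying them to the input image to form the imagined frames. The sketch below illustrates that idea only in broad strokes: it is not the authors' code, the network sizes and the restriction to per-frame affine warps are assumptions, and the paper's volumetric merge network is omitted here. It assumes PyTorch (torch.nn.functional.affine_grid / grid_sample).

```python
# Minimal sketch (not the authors' implementation): sample a latent vector,
# map it to per-frame affine transformation parameters with a hypothetical
# generator, and warp the input image into a short "imagined" clip.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformationGenerator(nn.Module):
    """Hypothetical generator: latent z -> T affine matrices (sizes illustrative)."""
    def __init__(self, z_dim=64, num_frames=4):
        super().__init__()
        self.num_frames = num_frames
        self.fc = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, num_frames * 6),  # 6 parameters per 2x3 affine matrix
        )

    def forward(self, z):
        theta = self.fc(z).view(-1, self.num_frames, 2, 3)
        # Keep untrained samples near the identity transform (illustrative choice).
        identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]], device=z.device)
        return 0.1 * theta + identity

def imagine_video(image, generator, z_dim=64):
    """Warp `image` (B, C, H, W) into num_frames frames for one latent sample."""
    B = image.size(0)
    z = torch.randn(B, z_dim, device=image.device)   # a different z gives a different video
    thetas = generator(z)                             # (B, T, 2, 3)
    frames = []
    for t in range(thetas.size(1)):
        grid = F.affine_grid(thetas[:, t], image.size(), align_corners=False)
        frames.append(F.grid_sample(image, grid, align_corners=False))
    return torch.stack(frames, dim=1)                 # (B, T, C, H, W)

# Usage: imagine two different videos from the same image.
gen = TransformationGenerator()
img = torch.rand(1, 3, 64, 64)
video_a = imagine_video(img, gen)
video_b = imagine_video(img, gen)   # a second latent sample yields a different motion
```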


Cited by 44 publications (28 citation statements)
References: 35 publications
“…All aforementioned work focuses on how to directly predict future frames. Different from these works, recent work proposes to predict the transformations needed for generating future frames [33], [4], which further boosts the performance of video prediction.…”
Section: Video Frame Prediction (mentioning)
confidence: 99%
“…As a result, recent research focused on Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) for image prediction and generation problems [32–40].…”
Section: Methods (mentioning)
confidence: 99%
“…Due to the ambiguous nature of dynamic frames prediction, stochastic models outperform deterministic models, and loss functions. As a result, recent research focused on Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) for image prediction, and generation problems [32][33][34][35][36][37][38][39][40] .…”
Section: Learning Model (mentioning)
confidence: 99%
“…In [4], the authors predict or imagine multiple frames from a single image. They generate affine transformations between each frame, and apply them to the original input image to produce their prediction.…”
Section: Non-human Motion Prediction (mentioning)
confidence: 99%
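The excerpt above notes that the method generates affine transformations and applies them to the original input image, while the abstract mentions a volumetric merge network that reconstructs each frame from the transformed image. The sketch below shows one plausible reading of such a merge step, combining several warped copies of the image with per-pixel softmax weights; the weight tensor stands in for a learned network and is purely illustrative (PyTorch assumed), not the paper's actual architecture.

```python
# Sketch of a per-pixel merge over K transformed copies of the input image
# (one possible interpretation of a "volumetric merge"; weights are random
# here, standing in for a hypothetical learned weight network).
import torch
import torch.nn.functional as F

def merge_candidates(warped, weight_logits):
    """warped: (B, K, C, H, W) candidate warps; weight_logits: (B, K, H, W)."""
    weights = F.softmax(weight_logits, dim=1).unsqueeze(2)   # (B, K, 1, H, W)
    return (warped * weights).sum(dim=1)                     # (B, C, H, W)

# Example: merge K=4 slightly shifted copies of an image into one output frame.
B, K, C, H, W = 1, 4, 3, 64, 64
image = torch.rand(B, C, H, W)
candidates = []
for k in range(K):
    theta = torch.tensor([[[1., 0., 0.05 * k], [0., 1., 0.]]])  # small horizontal shifts
    grid = F.affine_grid(theta, image.size(), align_corners=False)
    candidates.append(F.grid_sample(image, grid, align_corners=False))
warped = torch.stack(candidates, dim=1)          # (B, K, C, H, W)
weight_logits = torch.rand(B, K, H, W)           # placeholder for a learned network's output
frame = merge_candidates(warped, weight_logits)  # (B, C, H, W)
```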