2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI)
DOI: 10.1109/icoei.2018.8553719

Video Super Resolution with Generative Adversarial Network

Cited by 4 publications (3 citation statements, all classified as mentioning); References 8 publications.

“…The spatial alignment modules and the temporal adaptation are shown to increase the reconstruction quality, and thus the SR performance is considerably improved. Wang et al. [33] proposed depth SR on RGB-D video streams with large-displacement 3D motion. This method proceeds in two phases: motion compensation of the depth images and merging of the compensated depth images.…”
Section: Related Work (mentioning)
confidence: 99%
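The two-phase pipeline summarized in this excerpt can be illustrated with a minimal sketch: neighboring frames are motion-compensated toward a reference frame with dense optical flow, then merged by averaging. The Farneback flow, the averaging merge, and all function names below are illustrative assumptions, not the actual method of Wang et al. [33].

```python
import cv2
import numpy as np

def compensate(ref_gray, neighbor):
    """Phase 1 (sketch): warp a neighboring frame toward the reference
    frame using dense Farneback optical flow (an assumed stand-in for
    the motion model of the cited method)."""
    neigh_gray = cv2.cvtColor(neighbor, cv2.COLOR_BGR2GRAY)
    # Flow from reference to neighbor, so sampling the neighbor at
    # (grid + flow) aligns it with the reference.
    flow = cv2.calcOpticalFlowFarneback(
        ref_gray, neigh_gray, None, pyr_scale=0.5, levels=3,
        winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(neighbor, map_x, map_y, cv2.INTER_LINEAR)

def merge(ref, compensated_neighbors):
    """Phase 2 (sketch): fuse the reference frame with its
    motion-compensated neighbors by simple averaging."""
    stack = np.stack([ref] + compensated_neighbors).astype(np.float32)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```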
“…In computer vision, super resolution (SR) refers to a computational technique that reconstructs a higher-resolution image from a low-resolution image. Image and video super resolution studies are found in [45][46][47]. In super resolution, images generally lose their finer texture details when they are super-resolved at large upscaling factors.…”
Section: Related Study: Denoising Learning (mentioning)
confidence: 99%
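The detail loss at large upscaling factors that this excerpt describes is easy to demonstrate with a non-learned baseline: downscale an image, upscale it back with bicubic interpolation, and measure the fidelity drop. The ×4 factor, the file name, and the PSNR metric below are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def psnr(a, b):
    """Peak signal-to-noise ratio between two uint8 images, in dB."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Simulate x4 super resolution with a bicubic baseline: low frequencies
# are recovered but fine texture is lost, which is the gap that
# GAN-based SR methods aim to close.
hr = Image.open("frame.png")  # hypothetical high-resolution frame
lr = hr.resize((hr.width // 4, hr.height // 4), Image.BICUBIC)
sr = lr.resize(hr.size, Image.BICUBIC)
print(f"bicubic x4 PSNR: {psnr(np.array(hr), np.array(sr)):.2f} dB")
```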
“…The losses can be classified into adversarial loss, pixel-based loss, and feature-map-based loss. For example, Ledig et al. use an adversarial loss and a content loss [214], while Chen et al. use an MSE loss, a generative loss, and a VGG loss [212]. Other loss functions include the sum of a perceptual loss, an MSE-based content loss, and an adversarial loss, used by Gopan and Kumar [215]; the sum of a pixel-wise loss and an adversarial loss, used by Jiang et al. [216]; and the sum of a joint sparsifying transform loss and a supervision loss, used by You et al. [213].…”
(mentioning)
confidence: 99%
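The combination this excerpt attributes to Gopan and Kumar [215] (perceptual loss plus MSE-based content loss plus adversarial loss) can be sketched as a single generator objective. The VGG19 layer cut and the loss weights below are assumptions in the style of SRGAN-like setups, not values taken from the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen VGG19 feature extractor for the perceptual term. The cut at
# features[:36] (the ReLU after conv5_4, as in SRGAN-style setups) is
# an assumption, not a value from the cited paper.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:36].eval()
for p in vgg.parameters():
    p.requires_grad = False

def generator_loss(sr, hr, disc_logits_sr, w_perc=6e-3, w_adv=1e-3):
    """Sum of MSE content loss, VGG perceptual loss, and adversarial
    loss; the weights are illustrative assumptions."""
    content = F.mse_loss(sr, hr)                # pixel-wise MSE
    perceptual = F.mse_loss(vgg(sr), vgg(hr))   # feature-space MSE
    # Non-saturating adversarial term: push D(sr) toward the real label.
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits_sr, torch.ones_like(disc_logits_sr))
    return content + w_perc * perceptual + w_adv * adversarial
```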