2022
DOI: 10.3390/electronics11091499
Video Super-Resolution Using Multi-Scale and Non-Local Feature Fusion

Abstract: Video super-resolution generates high-resolution video frames, with rich detail and temporal consistency, from multiple low-resolution video frames. Most current methods use a two-stage structure that combines an optical flow network with a super-resolution network, but this process does not deeply mine the effective information contained in the video frames. Therefore, we propose a video super-resolution method that combines non-local features and multi-scale fe…
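The abstract names the two key components, non-local features and multi-scale features, but the excerpt stops short of the architecture. Below is a minimal, hypothetical PyTorch sketch of what these two blocks typically look like: a standard embedded-Gaussian non-local block and a simple multi-scale pyramid with fusion. Module names, channel sizes, and the fusion strategy are assumptions for illustration, not the paper's actual design.

```python
# Illustrative sketch only: a standard non-local block plus a simple
# multi-scale fusion module; not the architecture from the cited paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block: every position attends to all others."""

    def __init__(self, channels: int):
        super().__init__()
        self.inter = channels // 2
        self.theta = nn.Conv2d(channels, self.inter, 1)
        self.phi = nn.Conv2d(channels, self.inter, 1)
        self.g = nn.Conv2d(channels, self.inter, 1)
        self.out = nn.Conv2d(self.inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.phi(x).flatten(2)                    # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        attn = F.softmax(q @ k, dim=-1)               # pairwise similarities
        y = (attn @ v).transpose(1, 2).reshape(b, self.inter, h, w)
        return x + self.out(y)                        # residual connection


class MultiScaleFusion(nn.Module):
    """Extract features at several scales, then fuse them at full resolution."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in scales
        )
        self.fuse = nn.Conv2d(channels * len(scales), channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for s, conv in zip(self.scales, self.branches):
            y = F.avg_pool2d(x, s) if s > 1 else x
            y = conv(y)
            if s > 1:
                y = F.interpolate(y, size=(h, w), mode="bilinear",
                                  align_corners=False)
            feats.append(y)
        return self.fuse(torch.cat(feats, dim=1))
```

The non-local block captures long-range dependencies that plain convolutions miss, while the pyramid branches see the same frame at several receptive-field sizes; fusing both is the general idea the title points to.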

Cited by 9 publications (2 citation statements). References 56 publications.
“…14 We adapt the structure of the VSR-DUF neural network to perform video super-resolution and denoising of SPAD dToF data. 12 This network comprises blocks of 3D convolutions that extract spatio-temporal features without the need for extra steps (e.g. frame realignment).…”
Section: Methods
confidence: 99%
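For readers unfamiliar with the alignment-free approach this citing paper describes, here is a minimal, hypothetical PyTorch sketch of a stack of 3D convolutions that extracts spatio-temporal features directly from a clip of frames. VSR-DUF itself additionally predicts dynamic upsampling filters and residuals, which are omitted here; all sizes are illustrative.

```python
# Alignment-free spatio-temporal feature extraction with 3D convolutions,
# in the spirit of VSR-DUF; the dynamic-filter prediction stage is omitted.
import torch
import torch.nn as nn


class SpatioTemporalBlock(nn.Module):
    """3D convolutions over (time, height, width): no optical flow or
    explicit frame realignment is required."""

    def __init__(self, in_ch=3, feat=64, depth=3):
        super().__init__()
        layers = [nn.Conv3d(in_ch, feat, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Conv3d(feat, feat, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, clip):
        # clip: (batch, channels, frames, height, width)
        return self.body(clip)


# Usage: seven 32x32 low-resolution frames as a (B, C, T, H, W) tensor.
clip = torch.randn(1, 3, 7, 32, 32)
feats = SpatioTemporalBlock()(clip)  # -> (1, 64, 7, 32, 32)
```

Because each 3D kernel spans neighbouring frames, motion information is absorbed into the features themselves, which is why no separate alignment step is needed.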
“…11 Other techniques without alignment feature 3D convolutions or recurrent neural networks to exploit spatio-temporal information directly. 12 Video super-resolution schemes have been designed for RGB data and have not yet been optimised for depth frames. Some research has been conducted using depth, but these studies assume a static scene with a moving camera.…”
Section: Introduction
confidence: 99%
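As a counterpart to the 3D-convolution sketch above, here is an equally minimal, hypothetical sketch of the recurrent alternative the quote mentions: a hidden state carried from frame to frame, again with no explicit alignment step. The cell design and sizes are assumptions, not taken from any cited paper.

```python
# Illustrative recurrent propagation for video super-resolution: temporal
# information accumulates in a hidden state instead of aligned frame stacks.
import torch
import torch.nn as nn


class RecurrentVSRCell(nn.Module):
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, frame, state):
        # Concatenate the current low-resolution frame with the carried state.
        return self.fuse(torch.cat([frame, state], dim=1))


frames = torch.randn(5, 1, 3, 32, 32)   # (T, B, C, H, W) toy clip
cell = RecurrentVSRCell()
state = torch.zeros(1, 64, 32, 32)      # initial hidden state
for frame in frames:
    state = cell(frame, state)          # features accumulate over time
```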