2024
DOI: 10.1109/tcsvt.2023.3312321
Local-Global Temporal Difference Learning for Satellite Video Super-Resolution

Yi Xiao,
Qiangqiang Yuan,
Kui Jiang
et al.

Cited by 60 publications (7 citation statements) | References 67 publications
“…Xiao et al. [33] proposed an efficient and effective temporal compensation method for satellite video SR by exploiting well‐defined temporal differences. They designed a short‐term temporal difference module (S‐TDM), a long‐term temporal difference module (L‐TDM), and a difference compensation unit (DCU) to respectively achieve local information recovery, global compensation, and spatial consistency preservation.…”
Section: Related Work
Confidence: 99%
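The excerpt above distinguishes short-term (adjacent-frame) from long-term (clip-wide) temporal differences. The toy NumPy sketch below only illustrates what those two difference signals look like for a frame sequence; the function names and the use of the center frame as the long-term reference are illustrative assumptions, not the authors' S-TDM/L-TDM implementation.

```python
import numpy as np

def short_term_difference(frames):
    """Differences between adjacent frames (local motion cues).

    frames: array of shape (T, H, W), a grayscale video clip.
    Returns forward differences of shape (T-1, H, W).
    """
    return frames[1:] - frames[:-1]

def long_term_difference(frames):
    """Difference of every frame against the clip's center frame,
    a simple stand-in for a global temporal reference."""
    center = frames[len(frames) // 2]
    return frames - center

# Toy clip: 5 frames of 4x4 pixels with a single drifting bright pixel.
clip = np.zeros((5, 4, 4), dtype=np.float32)
for t in range(5):
    clip[t, 0, t % 4] = 1.0

s_diff = short_term_difference(clip)  # shape (4, 4, 4)
l_diff = long_term_difference(clip)   # shape (5, 4, 4)
```

In this sketch, nonzero entries of `s_diff` mark where content moved between consecutive frames, while `l_diff` captures how far each frame has drifted from the clip's center — loosely mirroring the local/global split the citation describes.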
“…The most common alternatives to BN are Group Normalization (GN) [14], Layer Normalization (LN) [15], Instance Normalization (IN) [16], and Batch Renormalization (BRN) [17]. While BN is the most widely employed normalization method in SOTA algorithms, LN is a fundamental part of the recently proposed Transformer architectures [31], which are becoming widely adopted for several learning tasks, such as remote sensing [32], [33] and computer vision [34]. To the best of our knowledge, there are no works benchmarking normalization layers for FL on non-IID data.…”
Section: Related Work
Confidence: 99%
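The key difference the excerpt touches on is the axis over which statistics are computed: BN normalizes each feature across the batch, while LN normalizes each sample across its features and is therefore independent of batch composition. A minimal NumPy sketch of the two (training-time statistics only, without the learnable scale/shift parameters):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch axis (BN, training-time stats)."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def layer_norm(x, eps=1e-5):
    """Normalize each sample over its feature axis (LN).

    Batch-size independent, which is one reason LN suits Transformers
    and settings with heterogeneous (non-IID) batches.
    """
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(8, 16))  # (batch, features)
bn = batch_norm(x)   # zero mean per feature column
ln = layer_norm(x)   # zero mean per sample row
```

Because LN's statistics depend only on the individual sample, its behavior does not change when batches are small or skewed — the property at stake in the federated-learning benchmark the citation discusses.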
“…For example, Yi et al. [15] and Li et al. [16] adopted non-local attention. Xiao et al. [35] exploited temporal difference attention. Wang et al. [36] and Xiao et al. [37] made use of deformable attention.…”
Section: Related Work
Confidence: 99%