2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00256
Event-based Video Reconstruction Using Transformer

Cited by 70 publications (39 citation statements)
References 32 publications
“…In encoder-decoder layers, the decoder also includes a CA sub-layer after SA, which allows it to attend to memory embeddings (sometimes referred to as context) provided by the encoder. In video, encoder-decoder architectures are exploited mostly for captioning [54], [55], [56], [57], [58], [59] and video generation [60], [61], [62]. These three configurations, however, are not the only possibility.…”
Section: Transformer Trends Adopted For Video
Citation type: mentioning (confidence: 99%)
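The excerpt above describes the standard decoder layout in which a cross-attention (CA) sub-layer follows self-attention (SA) and attends to the encoder's memory embeddings. A minimal single-head sketch of that ordering, with projections, masking, residuals, and layer norm omitted for brevity (all names here are illustrative, not from the cited works):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: softmax weights over keys, applied to values.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def decoder_layer(x, memory):
    """One simplified decoder layer: SA over the target sequence,
    then CA over the encoder's memory (context) embeddings."""
    x = attention(x, x, x)            # SA: queries, keys, values all from x
    x = attention(x, memory, memory)  # CA: queries from x, keys/values from memory
    return x

tgt = np.random.randn(5, 16)  # 5 target tokens, dim 16
mem = np.random.randn(9, 16)  # 9 encoder (context) embeddings
out = decoder_layer(tgt, mem)
print(out.shape)  # (5, 16): one updated embedding per target token
```

The key point the excerpt makes is the ordering: the CA queries come from the SA output, so the decoder first mixes information within the target sequence before consulting the encoder's context.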
“…Alternatively, [126] borrows from NLP and utilizes LSTMs to embed local temporal information into the input. Works using a hybrid ConvLSTM [149] are also found [62], [123]. Finally, in some instances, networks pre-trained to perform an auxiliary task (regarded as experts) are used to pre-process the input and provide specific information that can be leveraged by the Transformer [66], [131].…”
Section: Embedding
Citation type: mentioning (confidence: 99%)