2021
DOI: 10.48550/arxiv.2109.00829
Preprint

SlowFast Rolling-Unrolling LSTMs for Action Anticipation in Egocentric Videos

Cited by 1 publication (4 citation statements)
References 28 publications
“…In this case the scaled dot-product in the self-attention computes the pairwise similarity of all vertices from the inputs, which can be viewed as an implicit edge estimation. This can be extended by optionally providing the edge estimation explicitly, in the form of an adjacency matrix A = {a_vw : ∀w ∈ N(v), v ∈ G}, and using it during the attention computation as described in (11) and (12). We refer to these two cases as implicit and explicit edge learning.…”
Section: Methods
confidence: 99%
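The statement above treats the scaled dot-product as implicit edge estimation over graph vertices, optionally constrained by an explicit adjacency matrix. Below is a minimal sketch of that idea, not the cited paper's implementation: the function name edge_attention and its arguments are illustrative assumptions. With adj=None the attention weights come purely from the pairwise similarities (implicit edges); when an adjacency matrix is supplied, attention is masked to the given neighbourhood (explicit edges).

```python
# Sketch only: scaled dot-product self-attention over graph vertices,
# with an optional explicit adjacency matrix. Names are illustrative.
import torch
import torch.nn.functional as F

def edge_attention(x, w_q, w_k, w_v, adj=None):
    """x: (N, d) vertex features; adj: optional (N, N) adjacency matrix."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # (N, d) each
    logits = q @ k.t() / k.shape[-1] ** 0.5        # pairwise vertex similarities
    if adj is not None:
        # Explicit edge learning: keep only edges present in adj.
        logits = logits.masked_fill(adj == 0, float('-inf'))
    return F.softmax(logits, dim=-1) @ v           # (N, d) updated vertex features

# Usage sketch
N, d = 5, 8
x = torch.randn(N, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out_implicit = edge_attention(x, w_q, w_k, w_v)                    # implicit edges
out_explicit = edge_attention(x, w_q, w_k, w_v, adj=torch.eye(N))  # explicit edges
```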
“…RU-LSTM [19] deploys two LSTMs and behaves as an encoder-decoder: the first (rolling) LSTM progressively summarizes the observed frames, while the second (unrolling) LSTM unrolls over future predictions without observing them. The unrolling design can also be found in [11], [20], but with the rolling part replaced by SlowFast [21] and a Higher-Order Recurrent Transformer, respectively. [22] aggregates the multiple predictions by pooling over different granularities of temporal segments to improve anticipation accuracy.…”
Section: Related Work, A. Video Action Anticipation
confidence: 99%
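The rolling-unrolling design described in this statement can be sketched as follows. This is a hedged illustration under stated assumptions, not RU-LSTM's released code: the class name RollingUnrolling and its arguments are hypothetical. A rolling LSTM encodes the observed features, and its final state seeds an unrolling LSTM that is stepped into the future with no new observations, emitting action scores at each anticipated step.

```python
# Sketch only: a rolling LSTM summarizes observed frame features, then an
# unrolling LSTM is stepped into the future from that state without further
# observations. Class and argument names are illustrative.
import torch
import torch.nn as nn

class RollingUnrolling(nn.Module):
    def __init__(self, feat_dim, hidden_dim, num_classes):
        super().__init__()
        self.rolling = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.unrolling = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, observed, anticipation_steps):
        # observed: (B, T_obs, feat_dim) features of the observed video.
        _, state = self.rolling(observed)               # encode what was seen
        # Unroll into the future with no new input (zeros as placeholder).
        step = torch.zeros(observed.size(0), 1, observed.size(2),
                           device=observed.device)
        logits = []
        for _ in range(anticipation_steps):
            out, state = self.unrolling(step, state)    # one future step
            logits.append(self.classifier(out[:, -1]))  # action scores per step
        return torch.stack(logits, dim=1)               # (B, steps, num_classes)

# Usage sketch
model = RollingUnrolling(feat_dim=1024, hidden_dim=512, num_classes=10)
preds = model(torch.randn(2, 8, 1024), anticipation_steps=4)
```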