Learn to cycle: Time-consistent feature discovery for action recognition (2021)
DOI: 10.1016/j.patrec.2020.11.012

Cited by 22 publications (15 citation statements). References 5 publications.
“…For MTNet M (CR), we note a performance increase of +1.3% in the top-1 and +0.8% in the top-5 compared to the original network without CR. These accuracy rates are similar to both of the larger SRTG r3d-101 [243] and Channel-Separated Networks (ip-CSN-101) [260]. This increase in performance does not come at a cost in terms of the computational inference or the number of parameters with the additional number of parameters accounting for ∼ 10% of the total number of network parameters.…”
Section: Results on HACS (supporting)
Confidence: 61%
“…Notably, it also comes at a negligible computational overhead when added over MTNet (< +1%). More specifically, considering the overall efficiency of the MTNets in addition to the proposed methods, our MTNet S with CR can achieve accuracy rates similar to those of ResNet-101 with 3D or ResNet-50 with (2+1)D convolutions [261], as well as TAM [62] and SRTG r3d-50 [243]. However, the GFLOP requirements are very low, similar to the original MTNet S with only an additional > 0.1 GFLOPs that account for less than 10% of the total 5.8 GFLOPs.…”
Section: Results on HACS (mentioning)
Confidence: 99%
“…Others have worked on the minimization of the computational requirements by shifting activations along time [25]. Previous works that have touched upon regularization were aimed more at the overall temporal consistency of features (e.g., [26]) without the consideration of feature combinations that are most informative for some classes.…”
Section: Related Work (mentioning)
Confidence: 99%