2023
DOI: 10.48550/arxiv.2301.10931
Preprint
Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning

Abstract: With the rapid development of wearable cameras, a massive collection of egocentric video for first-person visual perception becomes available. Using egocentric videos to predict first-person activity faces many challenges, including limited field of view (FoV), occlusions, and unstable motions. Observing that sensor data from wearable devices facilitates human activity recognition (HAR), activity recognition using multi-modal data is attracting increasing attention. However, the deficiency of related dataset h…

Cited by 0 publications
References 55 publications