2022
DOI: 10.1007/978-3-031-04881-4_32

Visual Event-Based Egocentric Human Action Recognition

Abstract: This paper lies at the intersection of three research areas: human action recognition, egocentric vision, and visual event-based sensors. The main goal is the comparison of egocentric action recognition performance under either of two visual sources: conventional images, or event-based visual data. In this work, the events, as triggered by asynchronous event sensors or their simulation, are spatio-temporally aggregated into event frames (a grid-like representation). This makes it possible to use exactly the same neural m…
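The aggregation step the abstract describes, binning asynchronous events into grid-like event frames so that a standard frame-based network can consume them, can be sketched roughly as below. This is a minimal illustration only: the function name, the (timestamp, x, y, polarity) event layout, and the window length are assumptions for the sketch, not the paper's actual pipeline.

```python
import numpy as np

def events_to_frames(events, sensor_hw, window_us=33_000):
    """Aggregate asynchronous events into dense event frames.

    events: array of shape (N, 4) with columns (t_us, x, y, polarity),
            polarity in {-1, +1}. This layout is an assumption.
    sensor_hw: (height, width) of the event sensor.
    window_us: temporal window per frame (~33 ms here, i.e. ~30 fps).
    Returns an array of shape (num_frames, height, width).
    """
    t = events[:, 0]
    # Assign each event to a temporal bin relative to the first event.
    bins = ((t - t.min()) // window_us).astype(int)
    num_frames = int(bins.max()) + 1
    frames = np.zeros((num_frames, *sensor_hw), dtype=np.float32)
    # Accumulate signed polarities on the pixel grid (unbuffered add).
    np.add.at(frames,
              (bins, events[:, 2].astype(int), events[:, 1].astype(int)),
              events[:, 3])
    return frames
```

Each frame in the returned stack has the same shape as a conventional gray-level image, which is what allows the comparison to use exactly the same neural model for both visual sources.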

Cited by 7 publications (3 citation statements)
References 30 publications
“…The Open Images data is also used in research in other domains, such as human action recognition [92], image retrieval [93], and visual question answering [94].…”
Section: Google's Open Images (unclassified)
“…Also, event-based sensing has been employed to investigate action recognition on third-person view datasets, reflecting the increasing interest in this sensing paradigm [8]. In a recent study [9], the authors compared the performance of a CNN combined with an LSTM architecture on conventional gray-level frames against corresponding simulated event-based data for human action recognition. Their results show the plausibility of using simulated event-based data to classify four different activities.…”
Section: Related Work (mentioning)
confidence: 99%
“…Also, event-based sensing has been employed to investigate action recognition on third-person view datasets, reflecting the increasing interest in this sensing paradigm (Huang, 2021). In a recent study (Moreno-Rodríguez et al, 2022), the authors compared the performance of a CNN combined with an LSTM architecture on conventional gray-level frames against corresponding simulated event-based data for human action recognition. Their results show the plausibility of using simulated event-based data to classify four different activities.…”
Section: Related Work (mentioning)
confidence: 99%
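For context, the CNN-plus-LSTM pipeline the two statements above refer to can be sketched as below. This is a minimal PyTorch illustration: the layer sizes, feature dimension, and overall structure are assumptions for the sketch, not the architecture from the cited paper; only the four-class output reflects the quoted description of classifying four activities.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Per-frame CNN features pooled over time by an LSTM.

    Illustrative sketch only; layer sizes are assumptions,
    not taken from the cited paper.
    """
    def __init__(self, num_classes=4, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                 # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        # Run the CNN on every frame, then restore the time axis.
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)      # h: (1, batch, hidden)
        return self.head(h[-1])           # one logit vector per clip

model = CNNLSTMClassifier()
clip = torch.randn(2, 16, 1, 128, 128)    # 2 clips of 16 frames each
logits = model(clip)                       # shape (2, 4)
```

The same network accepts both gray-level frames and the event frames produced by an aggregation step like the one sketched earlier, since both share the single-channel image layout.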