2019 IEEE International Symposium on Circuits and Systems (ISCAS) 2019
DOI: 10.1109/iscas.2019.8702581
N-HAR: A Neuromorphic Event-Based Human Activity Recognition System using Memory Surfaces

Cited by 17 publications (6 citation statements)
References 19 publications
“…Because event-based vision sensing is a relatively new technology, only a few datasets captured with an event camera are available. Among these are datasets for human pose estimation [2], action recognition [16]–[18], face expression recognition [19], and car recognition [20], [21]. To address the limited availability of event data, researchers have alternatively suggested generating semi-synthetic and synthetic event-based datasets.…”
Section: A. Event Camera Datasets
confidence: 99%
“…Regarding real-world event-based action recognition datasets, PAF [17] is the first one; it offers 450 recordings spanning 10 categories from an indoor office setting, each with an average length of 5 s and a spatial resolution of 346 × 260. N-HAR [19] is another indoor dataset with 3,091 videos, but it is category-unbalanced and contains only 5 actions. DailyAction [14] provides 1,440 recordings across 12 action categories, albeit with a limited spatial resolution of 128 × 128 due to acquisition via DVS128 [77].…”
Section: Datasets for Action Recognition
confidence: 99%
“…One of the reasons is the lack of datasets. Although single-view event-based action datasets exist [16], [17], [18], [19], there is a lack of comprehensive multi-view event datasets specifically designed for action recognition. DHP19 [18] is the only dataset that can be used for multi-view event-based action recognition, but it is oriented towards pose estimation tasks and is small in scale (33 actions and 2,228 recordings).…”
Section: Introduction
confidence: 99%
“…In the past decade, several new kinds of vision sensors have been invented to overcome these disadvantages; one of them is the neuromorphic vision sensor (NVS), which delivers compelling features such as high temporal resolution, broad dynamic range, and low energy consumption. These characteristics have attracted increasing attention from both academia and industry, yielding promising achievements in activity recognition [1], aided driving [2], localization [3], and anomaly detection [4].…”
Section: Introduction
confidence: 99%