2020
DOI: 10.1109/access.2020.2986246
A Hybrid Network Based on Dense Connection and Weighted Feature Aggregation for Human Activity Recognition

Abstract: Human activity recognition (HAR) using body-worn sensors is an active research area in human-computer interaction and human activity analysis. The traditional methods use hand-crafted features to classify multiple activities, which is both heavily dependent on human domain knowledge and results in shallow feature extraction. Rapid developments in deep learning have caused most researchers to switch to deep learning methods, which extract features from raw data automatically. Most of the existing works on human…
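The title and abstract describe a network that combines dense connections with weighted feature aggregation. As a rough illustration only (not the authors' implementation, whose details are not given here), a minimal NumPy sketch of the two ideas — dense connectivity, where each layer receives the concatenation of all earlier feature maps, and softmax-weighted aggregation of multi-layer features — might look like:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def dense_block(x, layers):
    """Dense connectivity (DenseNet-style): each layer sees the
    concatenation of the input and all previous layers' outputs."""
    outputs = [x]
    for layer in layers:
        inp = np.concatenate(outputs, axis=-1)
        outputs.append(layer(inp))
    return np.concatenate(outputs, axis=-1)

def weighted_feature_aggregation(features, logits):
    """Aggregate same-shaped feature maps from several layers with
    softmax weights; `logits` stands in for learnable parameters."""
    w = softmax(logits)
    return sum(wi * f for wi, f in zip(w, features))

# Toy usage: 10 time steps, 4 sensor channels, two dense layers with growth 3.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # first layer: 4 -> 3 channels
W2 = rng.standard_normal((7, 3))   # second layer: (4 + 3) -> 3 channels
x = rng.standard_normal((10, 4))
feats = dense_block(x, [lambda a: a @ W1, lambda a: a @ W2])
print(feats.shape)  # (10, 10): 4 input channels + 3 + 3 grown channels
```

The dense block grows the channel dimension by each layer's output width while reusing all earlier features, which is what makes dense connectivity parameter-efficient for sensor time series.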

Cited by 21 publications (10 citation statements)
References 32 publications
“…At last, a fully-connected layer and a softmax function were used to compute the probability of each class. Lv et al. [13] introduced a technique for recognizing human actions using skeleton data from an RGB-D camera (the Kinect device). This HAR work was carried out in the computer-vision (CV) field.…”
Section: Related Work
confidence: 99%
“…Our benchmark compares head-to-head 17 articles (21–35) that share their approach to the human activity recognition task evaluated on the UniMiB-SHAR dataset. Deep learning was the method of choice in almost every case (21–26, 28–32, 34, 35) to try to achieve state-of-the-art results.…”
Section: Related Work From Benchmark
confidence: 99%
“…In Table A.5 ("Appendix 1"), the sensor datasets are described with attributes such as data source, #factors, sensor location, and activity type. It includes wearable sensor-based datasets (Alsheikh et al 2016; Asteriadis and Daras 2017; Zhang et al 2012; Chavarriaga et al 2013; Munoz-Organero 2019; Roggen et al 2010; Qin et al 2019), as well as smart-device sensor-based datasets (Ravi et al 2016; Cui and Xu 2013; Weiss et al 2019; Miu et al 2015; Reiss and Stricker 2012a, b; Lv et al 2020; Gani et al 2019; Stisen et al 2015; Röcker et al 2017; Micucci et al 2017). Apart from the datasets mentioned in Table A.5, there are a few more datasets worth mentioning, such as the Kasteren dataset (Kasteren et al 2011), which is also very popular. (2) Vision-based HAR: Devices for collecting 3D data are CCTV cameras (Koppula and Saxena 2016; Devanne et al 2015; Zhang and Parker 2016; Li et al 2010; Duan et al 2020; Kalfaoglu et al 2020; Gorelick et al 2007; Mahadevan et al 2010), depth cameras (Cippitelli et al 2016; Gaglio et al 2015; Neili Boualia and Essoukri Ben Amara 2021; Ding et al 2016; Cornell Activity Datasets: CAD-60 & CAD-120 2021), and videos from public domains like YouTube and Hollywood movie scenes (Gu et al 2018; Soomro et al 2012; Kuehne et al 2011; Sigurdsson et...…”
Section: A Short Note On HAR Datasets
confidence: 99%