2020
DOI: 10.1016/j.jksuci.2019.09.004
A new hybrid deep learning model for human action recognition

Cited by 133 publications (82 citation statements)
References 25 publications
“…UCF Sports Actions dataset (%):
Physical Sports Movements [27]: 86.67
HOIRM feature fusion [28]: 88.25
Hybrid deep learning model [29]: 89.01
Proposed method: 90.91
A comparison of overall results shows that the proposed method achieved a significant improvement, with recognition results as high as 89.09% and 88.26% over other methods, as shown in Table 5.…”
Section: Methods
confidence: 99%
“…Physical Sports Movements [27]: 86.67
HOIRM feature fusion [28]: 88.25
Hybrid deep learning model [29]: 89.01…”
Section: UCF Sports Actions dataset (%)
confidence: 99%
“…Each activity is presented as a short video of 3 s. To properly recognize human activities throughout the frames of each mini video, we used Inception V3 to extract the visual characteristics of a person in each frame. Moreover, to locate the moving person in a video sequence, we used a linear Kalman filter [38].…”
Section: Activity Recognition
confidence: 99%
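The snippet above describes locating a moving person across frames with a linear Kalman filter. The following is a minimal sketch of such a tracker under a constant-velocity motion model, not the cited authors' implementation; the transition matrix, noise covariances, and the simulated detections are all illustrative assumptions.

```python
import numpy as np

dt = 1.0  # one video frame per step (assumed)
F = np.array([[1, 0, dt, 0],   # state transition for state [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only observe the person's (x, y) position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3           # process noise covariance (assumed)
R = np.eye(2) * 1e-1           # measurement noise covariance (assumed)

x = np.zeros(4)                # initial state estimate
P = np.eye(4)                  # initial state covariance

def kalman_step(x, P, z):
    # Predict the next state from the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct with the measured person position z = (x, y).
    innov = z - H @ x_pred                   # innovation (residual)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Feed noisy detections of a person moving diagonally at (2, 1) px/frame.
rng = np.random.default_rng(0)
for t in range(20):
    z = np.array([t * 2.0, t * 1.0]) + rng.normal(0, 0.3, 2)
    x, P = kalman_step(x, P, z)
# x now holds the smoothed position and an estimated velocity.
```

Because the measurements only contain position, the filter recovers velocity through the cross-covariance between position and velocity that the motion model induces.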
“…Zeng et al. [24] proposed an approach to automatically extract discriminative features for recognising activity, based on CNN and on the data from the mobile sensors embedded in smartphones. Jaouedi et al. [25] presented an approach based on the analysis of video content, where features capture the visual characteristics of each frame of a video sequence, using a recurrent neural network model with the gated recurrent unit.…”
Section: Introduction
confidence: 99%
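The approach quoted above feeds per-frame visual features (e.g. from a CNN such as Inception V3) into a gated recurrent unit. The following toy numpy GRU cell sketches that recurrent stage under stated assumptions: the feature and hidden dimensions are illustrative (real CNN features are ~2048-d), the weights are random, and biases are omitted; it is not the cited authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
feat_dim, hidden = 8, 4        # assumed small sizes for illustration

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Randomly initialised GRU parameters: update gate z, reset gate r, candidate state.
Wz, Uz = rng.normal(0, 0.1, (hidden, feat_dim)), rng.normal(0, 0.1, (hidden, hidden))
Wr, Ur = rng.normal(0, 0.1, (hidden, feat_dim)), rng.normal(0, 0.1, (hidden, hidden))
Wh, Uh = rng.normal(0, 0.1, (hidden, feat_dim)), rng.normal(0, 0.1, (hidden, hidden))

def gru_step(h, x):
    z = sigmoid(Wz @ x + Uz @ h)              # update gate: how much to overwrite
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate: how much history to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1 - z) * h + z * h_tilde

# Run over a short "video": one CNN feature vector per frame.
frames = rng.normal(0, 1, (10, feat_dim))     # 10 frames of per-frame features
h = np.zeros(hidden)
for x in frames:
    h = gru_step(h, x)
# h now summarises the whole clip and would feed a softmax action classifier.
```

The final hidden state plays the role of a fixed-length clip descriptor, which is why a recurrent stage is a natural fit for variable-length action videos.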