2017 IEEE International Conference on Computer Vision Workshops (ICCVW) 2017
DOI: 10.1109/iccvw.2017.370

Large-Scale Multimodal Gesture Recognition Using Heterogeneous Networks


Cited by 57 publications (63 citation statements)
References 27 publications
“…Besides, multi-modal approaches for improved emotion recognition were also discussed in both this work and Ref. 270. An end-to-end architecture incorporating temporal convolutions and bidirectional recurrence was proposed in Ref.…”
Section: Iris Recognition
Mentioning confidence: 99%
“…The superior performance of our method demonstrates its effectiveness. Table 7 shows the performance of the two-Streams [8], 3D CNN [10], ConvLSTM [19], DI + CNN [19], and the proposed methods on the four categories of actions in the NTU RGB + D action dataset, using the depth modality alone and the cross-subject protocol. As expected, the proposed method outperformed all other methods not only in categories C3 and C4 but also in the other two categories.…”
Section: Feature Fusion
Mentioning confidence: 99%
“…Although the proposed method obtains outstanding performance overall, we observe that it performs relatively worse on actions such as “touch head,” “sneeze/cough,” “writing,” and “eating a snack.” A comparison on these actions is then made between the proposed method and popular approaches such as two-Streams [8], 3D CNN [10], ConvLSTM [19], and DI+CNN [19]. Based on this comparison, the proposed method achieves better performance than the other approaches on these actions.…”
Section: Feature Fusion
Mentioning confidence: 99%
“…At the submission time, our approach is in the second place as shown in Table 2.

Zhang et al. [32]   RGBD + Flow   58.65
Wang et al. [27]    RGBD + Flow   60.81
ResC3D [16]         RGBD + Flow   64.40

Table 5: Comparison with state-of-the-art on the ChaLearn dataset in validation accuracy.…”
Section: C3D
Mentioning confidence: 99%