2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.361
Continuous Gesture Recognition with Hand-Oriented Spatiotemporal Feature

Cited by 62 publications (62 citation statements)
References 16 publications
“…However, this approach lacked universal applicability, because some hand-motion information could not be converted into a static image. The work in [27] presented a spotting-recognition framework for large-scale continuous gesture recognition. Camgoz et al. [28] used sequence-to-sequence learning to address the recognition problems of sign language.…”
Section: B. Methods Based on Video Sequence
confidence: 99%
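The spotting-recognition framework cited above separates the problem into two stages: first spot candidate gesture segments in the continuous stream, then classify each segment in isolation. A minimal sketch of that two-stage idea, assuming a toy per-frame motion-energy signal and a stand-in classifier (both illustrative, not the method of [27]):

```python
# Hypothetical sketch of a spotting-recognition pipeline for continuous
# gesture recognition: "spot" gesture segments by thresholding per-frame
# motion energy, then classify each spotted segment independently.
# The threshold, min_len, and toy classifier are illustrative assumptions.

def spot_segments(motion_energy, threshold=0.5, min_len=3):
    """Return (start, end) index pairs where energy stays above threshold."""
    segments, start = [], None
    for i, e in enumerate(motion_energy):
        if e >= threshold and start is None:
            start = i                      # segment begins
        elif e < threshold and start is not None:
            if i - start >= min_len:       # keep only long-enough segments
                segments.append((start, i))
            start = None
    if start is not None and len(motion_energy) - start >= min_len:
        segments.append((start, len(motion_energy)))
    return segments

def classify_segment(energies):
    """Toy stand-in for a gesture classifier: label by mean energy."""
    return "strong" if sum(energies) / len(energies) > 0.8 else "weak"

stream = [0.0, 0.1, 0.9, 0.9, 1.0, 0.2, 0.1, 0.6, 0.7, 0.6, 0.0]
segs = spot_segments(stream)                       # [(2, 5), (7, 10)]
labels = [classify_segment(stream[s:e]) for s, e in segs]
```

In a real system the classifier in the second stage would be a learned model operating on the video frames of each spotted segment rather than on the energy signal itself.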
“…The multiple cues of sign language can be separated into the categories of multi-modality and multi-semantic. Early works on multi-modality utilize physical sensors to collect 3D spatial information, such as depth and infrared maps (Molchanov et al. 2016; Liu et al. 2017). With the development of flow estimation, Cui et al. (Cui, Liu, and Zhang 2019) explore the multi-modality fusion of RGB and optical flow and achieve state-of-the-art performance on the PHOENIX-2014 database.…”
Section: Related Work
confidence: 99%
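A common way to realize the RGB/optical-flow fusion mentioned above is late fusion: run a separate recognition stream per modality and combine their per-class scores. A minimal sketch, assuming softmax-style score vectors and a weighted-average fusion (the scores and weight are toy assumptions, not values from the cited work):

```python
# Hypothetical sketch of late score fusion for a two-stream model
# (an RGB stream and an optical-flow stream). Each stream outputs one
# score per gesture class; fusion is a weighted average of the two.

def fuse_scores(rgb_scores, flow_scores, alpha=0.5):
    """Weighted average of per-class scores from the two modalities."""
    assert len(rgb_scores) == len(flow_scores)
    return [alpha * r + (1 - alpha) * f
            for r, f in zip(rgb_scores, flow_scores)]

def predict(scores):
    """Index of the highest fused score = predicted class."""
    return max(range(len(scores)), key=scores.__getitem__)

rgb = [0.2, 0.5, 0.3]    # e.g. softmax output of the RGB stream
flow = [0.1, 0.3, 0.6]   # e.g. softmax output of the flow stream
fused = fuse_scores(rgb, flow, alpha=0.4)
pred = predict(fused)    # class favored after weighting both streams
```

The weight `alpha` trades off how much the appearance (RGB) stream is trusted relative to the motion (flow) stream; in practice it is tuned on a validation set.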
“…Given the foundational status of convolutional neural networks (CNNs) in deep learning, several research teams have conducted CNN-based isolated sign language recognition studies since 2013 [6, 17, 18, 19, 20, 21, 22, 23, 24, 25]. Based on CNN recognition, the algorithm can be optimized by adding multi-modal data (including depth, skeleton, key points of the human body, etc.…
Section: Related Work
confidence: 99%
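One simple way to add multi-modal data to a CNN, as the citation above describes, is early fusion at the input: stack an extra modality (e.g. depth) as an additional channel alongside RGB, so the network sees a 4-channel frame. A toy sketch of that channel stacking, with frame shapes and values that are illustrative assumptions:

```python
# Hypothetical sketch of input-level multi-modal fusion: append a depth
# value as a fourth channel to every RGB pixel, producing the 4-channel
# frames an early-fusion CNN would consume.

def stack_modalities(rgb_frame, depth_frame):
    """rgb_frame: H x W grid of [r, g, b] pixels; depth_frame: H x W grid
    of depth values. Returns an H x W grid of [r, g, b, d] pixels."""
    return [[pixel + [d] for pixel, d in zip(row_rgb, row_d)]
            for row_rgb, row_d in zip(rgb_frame, depth_frame)]

rgb = [[[1, 2, 3], [4, 5, 6]]]        # a 1x2 RGB frame
depth = [[9, 8]]                      # the aligned 1x2 depth map
fused = stack_modalities(rgb, depth)  # a 1x2 frame with 4 channels
```

Skeleton or body-keypoint modalities are usually fused differently (as coordinate sequences fed to a separate branch) since they are not per-pixel maps.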