2021
DOI: 10.3390/electronics10050534
Two Stage Continuous Gesture Recognition Based on Deep Learning

Abstract: The paper proposes an effective continuous gesture recognition method, which includes two modules: segmentation and recognition. In the segmentation module, the video frames are divided into gesture frames and transitional frames by using the information of hand motion and appearance, and continuous gesture sequences are segmented into isolated sequences. In the recognition module, our method exploits the spatiotemporal information embedded in RGB and depth sequences. For the RGB modality, our method adopts Co…
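As a rough illustration of the pipeline the abstract describes, the sketch below first segments a continuous stream into isolated gesture sequences from per-frame hand-motion magnitude, then classifies each segment with separate RGB and depth networks whose scores are fused. The threshold, minimum segment length, function names, and late-fusion step are assumptions made for illustration, not the paper's released implementation.

```python
# Minimal sketch of a two-stage continuous gesture recognition pipeline.
# All thresholds, model objects, and helper names are illustrative assumptions.

import numpy as np

def segment_gestures(hand_motion, motion_threshold=0.2, min_len=8):
    """Stage 1: label each frame as gesture/transitional from hand-motion
    magnitude, then group consecutive gesture frames into isolated sequences."""
    is_gesture = hand_motion > motion_threshold          # per-frame binary label
    segments, start = [], None
    for t, flag in enumerate(is_gesture):
        if flag and start is None:
            start = t                                    # gesture segment begins
        elif not flag and start is not None:
            if t - start >= min_len:                     # drop very short segments
                segments.append((start, t))
            start = None
    if start is not None and len(is_gesture) - start >= min_len:
        segments.append((start, len(is_gesture)))
    return segments

def recognize(rgb_frames, depth_frames, segments, rgb_model, depth_model):
    """Stage 2: classify each isolated sequence with RGB and depth networks
    and fuse their class scores (simple late fusion assumed here)."""
    labels = []
    for s, e in segments:
        rgb_scores = rgb_model(rgb_frames[s:e])          # spatiotemporal RGB scores
        depth_scores = depth_model(depth_frames[s:e])    # spatiotemporal depth scores
        labels.append(int(np.argmax(rgb_scores + depth_scores)))
    return labels
```

The split into a lightweight segmentation stage followed by per-segment classification is the key design choice: it lets the recognizer operate on isolated sequences, as in the paper's recognition module.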

Cited by 8 publications (14 citation statements)
References: 53 publications
“…Then, to increase performance, an additional convolutional layer and an increased sample size were considered. The first two tests were evaluated over mini-batches of 13 epochs, following the segmentation classifier proposed by Wang [28]. The last two tests were evaluated over a batch size of 64 epochs, a training batch size also presented in Wang's investigation [28].…”
Section: Postprocessing
confidence: 99%
“…The first two tests were evaluated over mini-batches of 13 epochs, following the segmentation classifier proposed by Wang [28]. The last two tests were evaluated over a batch size of 64 epochs, a training batch size also presented in Wang's investigation [28]. A 12GB NVIDIA Tesla K80 graphics processing unit provided by Google Colaboratory was used for training of the 20BN Jester dataset for the baseline model, TensorFlow [35] was used to deploy the model, and the training took approximately nine and a half hours.…”
Section: Postprocessing
confidence: 99%
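The setup quoted above (TensorFlow, a batch size of 64, training on the 20BN Jester dataset) maps onto a standard Keras training loop. The sketch below only shows where those settings would be configured; the model, dataset object, optimizer, and loss are placeholders assumed here, not taken from either the cited paper or the citing work.

```python
# Minimal TensorFlow/Keras sketch of where the quoted batch-size and epoch
# settings would go; model and dataset are placeholders, not the authors' code.

import tensorflow as tf

def train(model: tf.keras.Model, train_ds: tf.data.Dataset, epochs: int = 13):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # batch size 64 as reported in the citing work; epoch count is an assumption
    model.fit(train_ds.batch(64), epochs=epochs)
    return model
```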