The Korean Sign Language Dataset for Action Recognition (2019)
DOI: 10.1007/978-3-030-37731-1_43

Cited by 12 publications (14 citation statements)
References 15 publications

“…The NPU RGB+D dataset (Yang et al, 2019 ) is a unique multi-modal dataset that combines RGB (color) and depth information for sports action analysis across various sports, including basketball and football. Data preparation steps encompassed the synchronization of RGB videos with corresponding depth maps, ensuring temporal alignment.…”
Section: Methods (mentioning)
confidence: 99%
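The data preparation step quoted above hinges on temporal alignment of the RGB and depth streams. Below is a minimal sketch of one common approach, nearest-timestamp matching; the function name, the per-frame timestamp arrays, and the 20 ms tolerance are illustrative assumptions, not details taken from the cited work.

```python
import numpy as np

def align_depth_to_rgb(rgb_ts, depth_ts, tolerance=0.02):
    """For each RGB timestamp, pick the nearest depth timestamp.

    rgb_ts, depth_ts: 1-D arrays of frame timestamps in seconds.
    Returns (rgb_index, depth_index) pairs whose time difference is
    within `tolerance` seconds; unmatched RGB frames are dropped.
    """
    depth_ts = np.asarray(depth_ts)
    pairs = []
    for i, t in enumerate(np.asarray(rgb_ts)):
        j = int(np.argmin(np.abs(depth_ts - t)))   # nearest depth frame
        if abs(depth_ts[j] - t) <= tolerance:
            pairs.append((i, j))
    return pairs

# Example with synthetic timestamps: both streams at 30 fps,
# depth shifted by a small constant offset.
rgb_ts = np.arange(0, 1, 1 / 30)
depth_ts = np.arange(0, 1, 1 / 30) + 0.005
print(len(align_depth_to_rgb(rgb_ts, depth_ts)))  # -> 30 matched pairs
```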
“…Guo et al. introduced a transformer model, CNN meets Transformer (CMT), which incorporates self-attention with CNN layers to efficiently extract multiscale features [20]. Shin et al. further optimized CMT and reported 89.00% accuracy for KSL-77 and for KSL-20, respectively [21], [22].…”
Section: Related Work (mentioning)
confidence: 99%
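The excerpt describes a hybrid design in which convolutional layers supply local features and self-attention then mixes them globally. The PyTorch sketch below illustrates only that general pattern; it is not the CMT architecture from [20] or the optimized variant from [21], and the layer sizes and the 77-way classification head (assumed from the KSL-77 dataset name) are placeholders.

```python
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    """Toy hybrid block: a conv stem extracts local features,
    then multi-head self-attention mixes them globally."""

    def __init__(self, in_ch=3, dim=64, heads=4, num_classes=77):
        super().__init__()
        self.stem = nn.Sequential(                 # local feature extraction
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1),
            nn.BatchNorm2d(dim),
            nn.GELU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1),
        )
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                          # x: (B, 3, H, W)
        f = self.stem(x)                           # (B, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)      # (B, N, dim) token sequence
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)      # residual + layer norm
        return self.head(tokens.mean(dim=1))       # global average pool

model = ConvAttentionBlock()
logits = model(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 77])
```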
“…KSL is among the most widely used sign languages globally, and the KSL-77 and KSL-20 datasets are utilized in the study for evaluation [21], [22]. The KSL-77 dataset was collected from 20 individuals and includes 1,229 videos, from which 112,564 frames were extracted at a rate of 30 frames per second [22].…”
Section: A. KSL Dataset (mentioning)
confidence: 99%
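The excerpt above reports that frames were extracted from the KSL-77 videos at 30 frames per second. A minimal OpenCV sketch of such frame dumping is shown below; the file paths and naming scheme are hypothetical, and the snippet simply writes every decoded frame rather than reproducing the dataset authors' exact pipeline.

```python
import cv2
from pathlib import Path

def extract_frames(video_path, out_dir):
    """Dump every frame of a sign video as a numbered JPEG.

    The KSL-77 videos are reported at 30 fps, so no temporal
    resampling is applied here.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    idx = 0
    while True:
        ok, frame = cap.read()      # decode the next frame
        if not ok:
            break
        cv2.imwrite(str(out_dir / f"frame_{idx:05d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx                      # number of frames written

# Hypothetical usage (paths are placeholders):
# n = extract_frames("ksl77/videos/sign_0001.mp4", "ksl77/frames/sign_0001")
# print(n)
```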
“…This also requires effective feature extraction and classification algorithms for successful operation. To address this issue, some researchers have developed vision-based Korean Sign Language word recognition systems using ANNs [7], CNNs [3], [4], Transformers [7], and Graph Convolutional Networks (GCNs) [8]. However, all existing vision-based KSL systems are designed exclusively for sign word recognition, and no research work has been found for KSL alphabet recognition.…”
Section: Introduction (mentioning)
confidence: 99%