2013 28th International Conference on Image and Vision Computing New Zealand (IVCNZ 2013)
DOI: 10.1109/ivcnz.2013.6727005

Colour segmentation for multiple low dynamic range images using boosted cascaded classifiers

Cited by 8 publications (7 citation statements)
References 16 publications
“…This section provides an overview of the datasets used for performance evaluation: the ASL dataset [20], the ASL with Digits dataset [21], and the NUS Hand Posture dataset [22].…”
Section: Datasets
confidence: 99%
“…Two-dimensional CNNs have been frequently used for digit and letter signs, which usually do not contain temporal information and consist of static images. Damaneh et al [7] proposed a hybrid method using CNN, Gabor Filter, and ORB feature descriptor for recognizing static hand gestures, achieving accuracies of 99.92%, 99.8%, and 99.80% on the Massey [27], American Sign Language (ASL) alphabet [28], and ASL datasets [29], respectively. Das et al [30] used a hybrid method combining CNN and Random Forest classifiers to accurately predict numerals and characters in Bangla Sign Language with accuracies of 97.33% and 91.67%, respectively.…”
Section: Related Work
confidence: 99%
“…With their proposed model, they were able to achieve an accuracy of 83.29%. Masood, Thuwal, and Srivastava (2018) used VGG16 model to classify 36 different hand gestures (26 alphabets and 10 numerals) of ASL from the dataset given by Barczak, Reyes, Abastillas, Piccio, and Susnjak (2011). They initialized the parameters of their model by transferring weights from the VGG16 network pretrained on ImageNet dataset which consists of more than a million images of 1000 categories.…”
Section: American Sign Language (ASL)
confidence: 99%