2015 International Electronics Symposium (IES) 2015
DOI: 10.1109/elecsym.2015.7380825
Sign language learning based on Android for deaf and speech impaired people

Cited by 26 publications (10 citation statements); references 0 publications.
“…In [5], object detection is achieved using the Viola-Jones algorithm, which detects objects in the shape of a hand [6]. The Viola-Jones algorithm detects objects using feature values, which is faster than detection based on the per-pixel values of an image.…”
Section: Related Work
confidence: 99%
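The speed claim in the excerpt above rests on the integral-image trick at the core of the Viola-Jones detector: once the integral image is built, any rectangular feature costs a constant four lookups rather than one read per pixel. A minimal illustrative sketch (function names are my own, not from the cited paper):

```python
# Sketch of the integral-image idea behind Viola-Jones feature evaluation.
# A rectangle sum takes 4 table lookups instead of one read per pixel,
# which is why feature-based detection beats per-pixel scanning.

def integral_image(img):
    """Build ii so that ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, via 4 lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

def two_rect_feature(ii, top, left, bottom, right):
    """Haar-like edge feature: left half minus right half."""
    mid = (left + right) // 2
    return (rect_sum(ii, top, left, bottom, mid)
            - rect_sum(ii, top, mid + 1, bottom, right))

# Toy 4x4 "image": bright left half, dark right half.
img = [[9, 9, 0, 0]] * 4
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 3, 3))  # strong positive edge response
```

A full detector would slide many such features over the image and combine them in a boosted cascade; this sketch only shows why each feature evaluation is cheap.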
“…Research into systems capable of recognising sign language has received substantial attention over the past few decades, fuelled in particular by the rapid evolution of artificial intelligence techniques [1][2][3]. In turn, this has led to the development of many Sign Language Recognition (SLR) systems, referred to as SLR systems throughout the remainder of this chapter.…”
Section: Introduction
confidence: 99%
“…These systems, though varying in sign language dialect, share the common goal of correctly recognising hand gestures performed by a signer. However, the variety of proposed approaches to achieving this goal has produced a diverse area of research and development spanning areas of computer science such as Computer Vision (CV), sensor processing, human-computer interaction, and pattern recognition [1][2][3][4]. SLR systems fall into two main types of design and implementation: those that use wearable sensors, and those that use video footage and images.…”
Section: Introduction
confidence: 99%
“…The main purpose of this work is to assist hearing-impaired persons [6] and reduce the communication gap between hearing-impaired and hearing members of society [7]. According to published work, the minimum frame rate required to identify a gesture is 6-20 fps [8].…”
Section: Introduction
confidence: 99%