2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
DOI: 10.1109/fg.2015.7163162

A survey on mouth modeling and analysis for Sign Language recognition

Abstract: Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and the use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and translation between sign and spoken language. Researc…

Cited by 24 publications (12 citation statements). References 55 publications (75 reference statements).
“…The points with the ratio in a certain range are considered to be the points in the lip region. The discrimination formula is shown in equation (1), in which R is the red component and G is the green component.…”
Section: Pixel Information-based Methods
Citation type: mentioning; confidence: 99%
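
The ratio test described in this citation is straightforward to implement. Below is a minimal sketch in Python/NumPy, assuming a rule of the form t_low < R/G < t_high; the threshold values are illustrative placeholders, not the range defined by the cited equation (1).

import numpy as np

def lip_region_mask(rgb_image, low=1.1, high=1.8):
    # Mark pixels whose red/green ratio falls inside (low, high) as
    # lip-region candidates. The thresholds here are hypothetical;
    # the cited work's equation (1) defines the actual admissible range.
    r = rgb_image[..., 0].astype(np.float32)         # red component R
    g = rgb_image[..., 1].astype(np.float32) + 1e-6  # green component G (avoid division by zero)
    ratio = r / g
    return (ratio > low) & (ratio < high)            # boolean mask of lip-candidate pixels

Applied to an H x W x 3 RGB array, the function returns an H x W boolean mask that would typically be refined further, e.g. with morphological filtering, before lip-contour extraction.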
“…But gesture language has problems: it is difficult to learn and understand, and its expressive capacity is limited. Therefore, ALR technology can, to some extent, help people with hearing impairments communicate better with others [1,2]. In noisy environments, the speech signal is easily interfered with by surrounding noise, reducing the recognition rate.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
“…Of course, signers also use the face to express their emotions, so emotional and linguistic non-manual markers can interact in complex ways (De Vos et al., 2009). Antonakos et al. (2015) presented an overview of the use of non-manual parameters for SLR and concluded that only a limited number of works have focused on employing non-manual features in SLR. There have been works that focused on combining both manual and non-manual features (Freitas et al., 2017; Liu et al., 2014; Yang and Lee, 2013; Mukushev et al., 2020) or non-manual features only (Kumar et al., 2017).…”
Section: Importance of Non-manual Features
Citation type: mentioning; confidence: 99%
“…Even though the audio is in general much more informative than the video signal, speech perception relies on visual information to help decode spoken words when auditory conditions are degraded [3], [6], [7], [8]. Furthermore, for people with hearing impairments, the visual channel is the only source of information for understanding spoken words if there is no sign language interpreter [2], [9], [10]. Therefore, visual speech recognition is implicated in our speech perception process; it is influenced not only by lip position and movement but also by the speaker's face, which has been shown to transmit relevant information about the spoken message [4], [5].…”
Section: Introduction
Citation type: mentioning; confidence: 99%