Mexican Sign Language (MSL) is a language with its own syntax and lexicon, used by deaf people to express thoughts, ideas, and emotions. However, most hearing people are unable to understand it. The alphabet of any Sign Language (SL) is composed of signs, where each sign corresponds to a letter of the alphabet of the dominant language in the region, for example Spanish or English. Most signs of a signed alphabet are static, that is, they are defined only by the configuration of the hands; however, some letters are represented by signs that include movement. The present work proposes a system that, using computer vision and image processing techniques, identifies the 27 letters of the Spanish alphabet (including both static and dynamic signs) in a mobile application. To solve the sign identification problem, a combination of image processing techniques and deep learning was used. The Canny and CamShift algorithms were implemented to recognize edges and the trajectories of signs with movement. Once these features were extracted, the K-means algorithm and TensorFlow were used to classify the signs. The system achieves 92% accuracy in alphabet sign detection.
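The following is a minimal sketch of the edge and trajectory extraction stage described above, assuming OpenCV is available. The video source, initial hand window, and thresholds are illustrative placeholders, not values taken from the paper; the downstream K-means/TensorFlow classification is only indicated in a comment.

```python
# Sketch: Canny edges for static signs, CamShift trajectory for dynamic signs.
# The region of interest and thresholds below are assumptions for illustration.
import cv2

cap = cv2.VideoCapture(0)                      # example input: webcam
ok, frame = cap.read()
assert ok, "a camera frame is required for this sketch"

x, y, w, h = 200, 150, 120, 120                # assumed initial hand window
track_window = (x, y, w, h)

# Hue histogram of the hand region so CamShift can track it frame to frame.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

trajectory = []                                # hand centroids for dynamic signs
while ok:
    # Static signs: hand-shape edges via Canny.
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 100, 200)

    # Dynamic signs: follow the hand with CamShift and record its path.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    trajectory.append(rot_rect[0])             # (cx, cy) centre of the tracked box

    ok, frame = cap.read()

cap.release()
# `edges` and `trajectory` would then be passed to the K-means/TensorFlow classifiers.
```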
Resumen. Automatic translation faces computational and linguistic complexities, as well as the principles governing both the source and the target language. This process becomes even more complicated when one of the languages involved is not written, as is the case of Mexican Sign Language (LSM), which is developed by people with hearing disabilities. The present work focuses on the development of a tool for direct, rule-based translation from written Spanish into Mexican Sign Language (LSM). To this end, it uses multimedia databases, since LSM is a visual-gestural language, and NLP for automatic translation with lexical, syntactic, and morphological analysis.
Keywords: automatic translation, deafness, natural language processing, computational linguistics, multimedia databases, LSM.
Abstract. Automatic translation faces computational as well as linguistic complexities because of the principles and standards of both the source and the target language. This process is an even bigger challenge when one of the languages is not written, as is the case of Mexican Sign Language (LSM), which is used by people with hearing disabilities. This paper focuses on the development of a tool for direct, rule-based translation from written Spanish to Mexican Sign Language. The tool uses multimedia databases, because the language involved is visual-gestural, and NLP for automatic translation with lexical, syntactic, and morphological analysis.
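Below is a minimal sketch of the direct, rule-based translation pipeline described in the abstract: a lexical pass over the written Spanish input followed by a lookup in a multimedia lexicon that maps glosses to sign-video clips. The lexicon entries, file names, and the dropping of function words are hypothetical simplifications for illustration; the paper's actual grammar rules and database schema are not reproduced here.

```python
# Toy rule-based Spanish -> LSM gloss translation over a multimedia lexicon.
# Everything in LEXICON (glosses, clip names, dropped words) is an assumption.
import re

LEXICON = {
    "casa": ("CASA", "casa.mp4"),
    "grande": ("GRANDE", "grande.mp4"),
    "la": None,   # article dropped in glossing (assumed simplification)
    "es": None,   # copula dropped as well (assumed simplification)
}

def tokenize(sentence: str) -> list:
    """Very rough lexical analysis: lowercase word tokens only."""
    return re.findall(r"[a-záéíóúñü]+", sentence.lower())

def translate(sentence: str) -> list:
    """Direct translation: look up each token and drop function words.
    A real system would also apply syntactic/morphological reordering rules here."""
    glosses = []
    for token in tokenize(sentence):
        entry = LEXICON.get(token)   # unknown words are skipped in this toy sketch
        if entry is not None:
            glosses.append(entry)
    return glosses

print(translate("La casa es grande"))
# [('CASA', 'casa.mp4'), ('GRANDE', 'grande.mp4')]
```

The returned (gloss, clip) pairs stand in for the multimedia database lookup: each gloss points to the video of the corresponding sign, which the tool would play back in sequence.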