“…A disabling hearing loss typically results in delay or lack of spoken language acquisition. Depending on the situation at home, it often also delays the development of sign language, which is a visual-spatial language that does not quite match the native spoken language (Luqman and Mahmoud, 2020). In all sign languages, the whole body is used, including facial expressions (facial grammar), which is required for depth of understanding (Borgia et al., 2014).…”
The COVID-19 pandemic has interrupted the education of millions of students across the world. The purpose of this study was to investigate the perceptions regarding the technological instruction and accommodations provided to deaf students in online distance learning during the COVID-19 pandemic. This study was qualitative in nature and used anonymous, one-to-one semi-structured interviews. In June 2020, we interviewed a convenience sample of deaf students (n = 15) and their instructors (n = 3) and analysed the responses thematically. Upon achieving theme saturation, the thematic structure analysis was finalised. The results revealed five main themes related to deaf students’ experience with online distance learning during COVID-19. The themes are as follows: course content delivered, technology used, delivery method, assessment tools used, and social interactions. Each theme is discussed and compared with the related literature to scientifically encapsulate its suggested dimensions.
The interviewed students described their experience of using online technology in both negative and positive terms. Instructors also provided input on their experiences during that time. Online distance learning was described as a difficult and challenging experience that lacked efficient communication channels and failed to address the needs of deaf students with respect to the communication medium. The typical course delivery methods were described as challenging, and the lack of social interaction was highlighted as a liability. At the same time, participants acknowledged some ancillary benefits of online distance learning, particularly that it enhanced their technology skills and their competence in adapting to a new environment.
“…This drastic shift was not only motivated by the successful applications of NMT techniques in spoken language MT, but also by the publication of the RWTH-PHOENIX-Weather 2014T dataset and the promising results obtained on that dataset using NMT methods [10]. A single outlier is found in the paper by Luqman and Mahmoud, who use Rule-based Machine Translation (RBMT) in 2020 to translate from Arabic sign language into Arabic [43].…”
Section: Sign Language MT (mentioning)
confidence: 99%
“…In total, 17 papers (53%) report on a Gloss2Text model [7,45,63,41,62,58,27,10,38,1,43,80,11,44,51,46,81]. Sign2Gloss2Text models are proposed in 7 papers (22%) [64,23,10,39,80,11,84].…”
Section: Tasks (mentioning)
confidence: 99%
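The snippet above distinguishes Gloss2Text models, which translate an annotated gloss sequence into a spoken-language sentence, from Sign2Gloss2Text cascades that first recognise glosses from video. As a rough illustration of how Gloss2Text is typically framed as ordinary sequence-to-sequence translation, the following PyTorch sketch builds a minimal, untrained encoder-decoder; the toy vocabularies, the example sentence and every hyperparameter are invented for illustration and do not come from any of the cited papers.

```python
import torch
import torch.nn as nn

# Hypothetical toy vocabularies for illustration only.
GLOSS_VOCAB = {"<pad>": 0, "<bos>": 1, "<eos>": 2, "YESTERDAY": 3, "SCHOOL": 4, "GO-TO": 5, "I": 6}
TEXT_VOCAB = {"<pad>": 0, "<bos>": 1, "<eos>": 2, "i": 3, "went": 4, "to": 5, "school": 6, "yesterday": 7}

class Gloss2Text(nn.Module):
    """Encoder-decoder transformer mapping gloss token IDs to spoken-language token IDs."""
    def __init__(self, d_model=64):
        super().__init__()
        self.src_emb = nn.Embedding(len(GLOSS_VOCAB), d_model)
        self.tgt_emb = nn.Embedding(len(TEXT_VOCAB), d_model)
        # Positional encodings are omitted for brevity; a real model would add them.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, dim_feedforward=128, batch_first=True)
        self.out = nn.Linear(d_model, len(TEXT_VOCAB))

    def forward(self, src_ids, tgt_ids):
        # Causal mask so each target position only attends to earlier target tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(self.src_emb(src_ids), self.tgt_emb(tgt_ids), tgt_mask=tgt_mask)
        return self.out(hidden)  # (batch, target length, target vocabulary) logits

# A single (untrained) forward pass on one hypothetical gloss sentence.
glosses = torch.tensor([[1, 3, 4, 5, 6, 2]])        # <bos> YESTERDAY SCHOOL GO-TO I <eos>
shifted_text = torch.tensor([[1, 3, 4, 5, 6, 7]])   # <bos> i went to school yesterday
logits = Gloss2Text()(glosses, shifted_text)
print(logits.shape)                                  # torch.Size([1, 6, 8])
```

A trained system would additionally need a tokeniser for real data and beam-search decoding at inference time; both are left out to keep the sketch self-contained.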
“…Note that the largest dataset in terms of number of parallel sentences, ASLG-PC12, contains 827 thousand training sentences. For MT between spoken languages, datasets typically contain several millions of sentences. 12 papers (37.5%) use custom datasets that are not publicly available [38,33,34,37,43,62,45,56,41,7,57,44], limiting further analysis of their results, as they cannot be compared directly to other papers.…”
Automatic translation from signed to spoken languages is an interdisciplinary research domain, lying at the intersection of computer vision, machine translation and linguistics. Nevertheless, research in this domain is performed mostly by computer scientists in isolation. As the domain is becoming increasingly popular (the majority of scientific papers on the topic of sign language translation have been published in the past three years), we provide an overview of the state of the art as well as some required background in the different related disciplines. We give a high-level introduction to sign language linguistics and machine translation to illustrate the requirements of automatic sign language translation. We present a systematic literature review to illustrate the state of the art in the domain and then, harking back to the requirements, lay out several challenges for future research. We find that significant advances have been made on the shoulders of spoken language machine translation research. However, current approaches are often not linguistically motivated or are not adapted to the different input modality of sign languages. We explore challenges related to the representation of sign language data, the collection of datasets, the need for interdisciplinary research and requirements for moving beyond research, towards applications. Based on our findings, we advocate for interdisciplinary research and for basing future research on linguistic analysis of sign languages. Furthermore, the inclusion of deaf and hearing end users of sign language translation applications in use case identification, data collection and evaluation is of the utmost importance in the creation of useful sign language translation models. We recommend iterative, human-in-the-loop design and development of sign language translation models.
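The review contrasts approaches that go directly from video to text with approaches that pass through an intermediate gloss representation (the Sign2Gloss2Text models counted in the earlier snippet). A minimal, purely schematic Python sketch of the two routes follows; the type aliases, function names and dummy components are hypothetical placeholders rather than interfaces from any cited system.

```python
from typing import Callable, List, Sequence

Frames = Sequence[bytes]   # stand-in for a video clip (e.g. RGB frames or pose keypoints)
Glosses = List[str]

def sign2gloss2text(
    clip: Frames,
    recognise: Callable[[Frames], Glosses],   # continuous sign language recognition
    translate: Callable[[Glosses], str],      # Gloss2Text translation (see earlier sketch)
) -> str:
    """Cascade: recognise glosses from video, then translate the glosses into text."""
    glosses = recognise(clip)
    return translate(glosses)

def sign2text(clip: Frames, model: Callable[[Frames], str]) -> str:
    """End-to-end alternative: translate directly from video features to text,
    skipping the intermediate gloss representation."""
    return model(clip)

# Toy usage with dummy components, for illustration only.
dummy_clip: Frames = [b"frame0", b"frame1"]
print(sign2gloss2text(
    dummy_clip,
    recognise=lambda c: ["YESTERDAY", "SCHOOL", "GO-TO", "I"],
    translate=lambda g: "i went to school yesterday",
))
```

A practical trade-off of the cascade is that recognition errors in the first stage propagate into the translation stage, whereas the direct route avoids the intermediate representation but is harder to supervise without gloss annotations.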
“…This architecture was detailed, and its assessment was compared with our earlier work and a modern machine translation system. Luqman et al. [30] suggested that translation should enable a computer to operate across semantic, syntactic, and textual dimensions to produce natural language (SST-CL). Transferring a word or an utterance from one language to another can sometimes be carried out effectively with MT, although it may have both evident and unexpected implications.…”
Section: Literature Survey On Traditional Corpus-driven Translation Techniques (mentioning)
Machine translation can be used as a language-based method in which words are translated into the most appropriate target-language expressions that replace them. The aim is for the system to learn knowledge interpretation automatically from large amounts of data instead of relying on rules written by human experts. Although end-to-end machine translation (MT) has recently made considerable progress, low-resource language pairs and domains still suffer from data scarcity. In this paper, an architectural statistics source-based translation machine (ASS-TM) model is introduced to address the data scarcity problem when translating languages with small corpora. A discriminative learning process (DLP) is employed to enlarge the system's vocabulary and set of syntactic structures by integrating synonyms and paraphrases obtained during corpus training. Iterative pipelines integrate and combine various generation models using an effective decoding framework. A symmetric context-free grammar (SCG) is implemented to extract a translation memory that captures the conceptual relationships between the two components' structures. The simulation analysis is performed in terms of accuracy and efficiency, demonstrating a reliability of 97.3% for the proposed framework.
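The abstract describes a symmetric context-free grammar as the bridge between the structures of the two languages, which reads like the synchronous grammars commonly used in statistical MT. The ASS-TM grammar itself is not public, so the toy Python sketch below only illustrates the general synchronous-grammar idea: paired rewrite rules expanded by the same derivation on both sides, so that whole constituents are reordered rather than substituted word by word. The English/French rules are invented purely for illustration.

```python
import random

# Each rule pairs a source right-hand side with a target right-hand side.
# Uppercase symbols are non-terminals. A non-terminal appearing on both sides
# is expanded by the same sub-derivation, which is how the grammar reorders
# whole constituents instead of translating word by word.
SCFG = {
    "S":    [(["NP", "VP"], ["NP", "VP"])],
    "NP":   [(["the", "ADJ", "NOUN"], ["la", "NOUN", "ADJ"])],  # adjective follows the noun in the target
    "ADJ":  [(["red"], ["rouge"])],
    "NOUN": [(["car"], ["voiture"])],
    "VP":   [(["stops"], ["s'arrête"])],
}

def derive(symbol, rng):
    """Expand one non-terminal, returning an aligned (source, target) token pair."""
    src_rhs, tgt_rhs = rng.choice(SCFG[symbol])
    # Expand each non-terminal exactly once and reuse the result on both sides.
    expanded = {s: derive(s, rng) for s in src_rhs if s in SCFG}
    src = [t for s in src_rhs for t in (expanded[s][0] if s in expanded else [s])]
    tgt = [t for s in tgt_rhs for t in (expanded[s][1] if s in expanded else [s])]
    return src, tgt

source, target = derive("S", random.Random(0))
print(" ".join(source), "->", " ".join(target))
# the red car stops -> la voiture rouge s'arrête
```

Real systems induce such rule pairs from aligned corpora and score competing derivations during decoding; the sketch generates a single aligned pair only to show the mechanism.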