Facial expressions in sign languages are used to express grammatical functions, such as question marking, but can also express emotions (either the signer's own or in constructed action contexts). Emotions and grammatical functions can use the same articulators, and their combinations can be congruent or incongruent. For instance, both surprise and polar questions can be marked by raised eyebrows, while anger is usually marked by lowered eyebrows. We investigated what happens when different emotions (neutral/surprise/anger) are combined with different sentence types (statement/polar question/wh-question) in Kazakh-Russian Sign Language (KRSL), replicating studies previously conducted for other sign languages. We asked 9 native signers (5 deaf, 4 hearing children of deaf adults) to sign 10 simple sentences in 9 conditions (3 emotions × 3 sentence types). We used OpenPose software to track eyebrow position in the video recordings. We found that both emotions and sentence types influence eyebrow position in KRSL: eyebrows are raised for polar questions and surprise, and lowered for anger. There are also interactions between the two factors, as well as differences between hearing and deaf native signers, namely a smaller effect of polar questions in the deaf group and a different interaction between emotions and wh-question marking in the two groups. We thus find evidence for complex influences on the non-manual behavior of signers, and showcase a quantitative approach to this field.
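To illustrate the kind of measurement described above, the sketch below computes a scale-normalized eyebrow-height feature from OpenPose's standard 70-point face keypoints. The keypoint indices and the normalization by inter-ocular distance are assumptions made for illustration; the study's exact feature definition is not given in the abstract.

```python
import json
import numpy as np

# Indices in OpenPose's 70-point face model (assumed feature definition;
# the study's exact normalization may differ).
RIGHT_BROW = range(17, 22)
LEFT_BROW = range(22, 27)
RIGHT_EYE = range(36, 42)
LEFT_EYE = range(42, 48)

def eyebrow_height(openpose_json_path):
    """Return a scale-normalized eyebrow height for one video frame."""
    with open(openpose_json_path) as f:
        data = json.load(f)
    if not data["people"]:
        return None  # no face detected in this frame
    kp = np.array(data["people"][0]["face_keypoints_2d"]).reshape(-1, 3)
    brows = kp[list(RIGHT_BROW) + list(LEFT_BROW), :2]
    eyes = kp[list(RIGHT_EYE) + list(LEFT_EYE), :2]
    # Vertical distance between mean eye and mean brow position
    # (image y grows downward, so this increases when the brows are raised).
    raw = eyes[:, 1].mean() - brows[:, 1].mean()
    # Normalize by inter-ocular distance (outer eye corners) to factor out
    # the signer's distance from the camera.
    interocular = np.linalg.norm(kp[36, :2] - kp[45, :2])
    return raw / interocular if interocular > 0 else None
```

Averaging this value over the frames of a sentence gives a per-condition eyebrow measure that can then be compared across emotions and sentence types.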
This paper presents a new large-scale signer-independent dataset for Kazakh-Russian Sign Language (KRSL) for the purposes of Sign Language Processing. We envision it serving as a new benchmark dataset for performance evaluation of Continuous Sign Language Recognition (CSLR) and Translation (CSLT) tasks. The proposed FluentSigners-50 dataset consists of 173 sentences performed by 50 KRSL signers, resulting in 43,250 video samples. Dataset contributors recorded videos in real-life settings against a wide variety of backgrounds using various devices such as smartphones and web cameras. Therefore, distance to the camera, camera angles and aspect ratios, video quality, and frame rates varied across contributors. Additionally, the proposed dataset contains a high degree of linguistic and inter-signer variability and thus provides a more realistic training set for recognizing real-life sign language. The FluentSigners-50 baseline is established using two state-of-the-art methods, Stochastic CSLR and TSPNet. To this end, we carefully prepared three benchmark train-test splits for model evaluation in terms of signer independence, age independence, and unseen sentences. FluentSigners-50 is publicly available at https://krslproject.github.io/FluentSigners-50/
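As an illustration of the signer-independence criterion, the sketch below partitions video samples so that no signer appears in both the training and test sets. The sample fields (video_path, signer_id, sentence_id) and the helper function are hypothetical; the official split files distributed with FluentSigners-50 should be used in practice.

```python
import random

def signer_independent_split(samples, test_signers, seed=0):
    """Split video samples so that no test signer appears in training.

    `samples` is an iterable of dicts with (hypothetical) keys
    'video_path', 'signer_id', and 'sentence_id'. `test_signers` is a
    set of signer IDs reserved for evaluation.
    """
    train, test = [], []
    for s in samples:
        (test if s["signer_id"] in test_signers else train).append(s)
    # Shuffle the training portion deterministically for reproducible batching.
    random.Random(seed).shuffle(train)
    return train, test
```

Age-independent and unseen-sentence splits follow the same pattern, holding out age groups or sentence IDs instead of signer IDs.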
The paper presents the first dataset that aims to serve interdisciplinary purposes, benefiting both the computer vision community and sign language linguistics. To date, a majority of Sign Language Recognition (SLR) approaches treat sign language recognition as a manual gesture recognition problem. However, signers use other articulators — facial expressions and head and body position and movement — to convey linguistic information. Given the important role of non-manual markers, this paper proposes a dataset and presents a use case to stress the importance of including non-manual features to improve the recognition accuracy of signs. To the best of our knowledge, no prior publicly available dataset exists that explicitly focuses on the non-manual components responsible for the grammar of sign languages. To this end, the proposed dataset contains 28,250 high-resolution, high-quality videos of signs, with annotation of manual and non-manual components. We conducted a series of evaluations to investigate whether non-manual components improve sign recognition accuracy. We release the dataset to encourage SLR researchers and to help advance current progress in this area toward real-time sign language interpretation. Our dataset will be made publicly available at https://krslproject.github.io/krsl-corpus
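The sketch below illustrates one simple way to test whether non-manual features help: mean-pool hand and face keypoint sequences per video, then compare a classifier trained on manual features alone against one trained on manual plus non-manual features. The pooling scheme, classifier, and function names are illustrative assumptions, not the evaluation protocol used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pooled_features(keypoint_seq):
    """Mean-pool a (num_frames, num_points, 2) keypoint sequence into one vector."""
    return keypoint_seq.reshape(keypoint_seq.shape[0], -1).mean(axis=0)

def compare_feature_sets(hand_seqs, face_seqs, labels):
    """Compare sign classification accuracy with and without non-manual features.

    hand_seqs / face_seqs: per-video keypoint arrays (hypothetical inputs);
    labels: sign class labels. This is a toy baseline for illustration only.
    """
    manual = np.stack([pooled_features(h) for h in hand_seqs])
    non_manual = np.stack([pooled_features(f) for f in face_seqs])
    both = np.concatenate([manual, non_manual], axis=1)
    clf = LogisticRegression(max_iter=1000)
    acc_manual = cross_val_score(clf, manual, labels, cv=5).mean()
    acc_both = cross_val_score(clf, both, labels, cv=5).mean()
    return acc_manual, acc_both
```

A gap between the two accuracies would indicate that the non-manual annotations carry discriminative information beyond the manual channel.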