The lack of a written representation for American Sign Language (ASL) makes it difficult to do something as commonplace as looking up an unknown word in a dictionary. Most printed dictionaries organize ASL signs (represented in drawings or pictures) by their nearest English translation, so unless one already knows the meaning of a sign, dictionary look-up is not a simple proposition. In this paper we introduce the ASL Lexicon Video Dataset, a large and expanding public dataset containing video sequences of thousands of distinct ASL signs, as well as annotations of those sequences, including the start/end frames and class label of every sign. This dataset is being created as part of a project to develop a computer vision system that allows users to look up the meaning of an ASL sign. At the same time, the dataset can be useful for benchmarking a variety of computer vision and machine learning methods designed for learning and/or indexing a large number of visual classes, especially approaches for analyzing gestures and human communication.
Abstract. A method is presented to help users look up the meaning of an unknown sign from American Sign Language (ASL). The user submits a video of the unknown sign as a query, and the system retrieves the most similar signs from a database of sign videos. The user then reviews the retrieved videos to identify the video displaying the sign of interest. Hands are detected in a semi-automatic way: the system performs some hand detection and tracking, and the user has the option to verify and correct the detected hand locations. Features are extracted based on hand motion and hand appearance. Similarity between signs is measured by combining dynamic time warping (DTW) scores, which are based on hand motion, with a simple similarity measure based on hand appearance. In user-independent experiments, with a system vocabulary of 1,113 signs, the correct sign was included in the top 10 matches for 78% of the test queries.
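The retrieval step described above (DTW on hand-motion trajectories, combined with an appearance term) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature definitions, the appearance measure, and the `alpha` weight are all hypothetical placeholders.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two hand-motion
    trajectories, each an (n, d) array of per-frame features."""
    n, m = len(traj_a), len(traj_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip frame in b
                                 cost[i, j - 1],      # skip frame in a
                                 cost[i - 1, j - 1])  # match frames
    return cost[n, m]

def combined_score(query_motion, db_motion,
                   query_appearance, db_appearance, alpha=0.5):
    """Combine motion (DTW) and appearance dissimilarity;
    lower means more similar. alpha is a hypothetical weight,
    not a parameter from the paper."""
    motion = dtw_distance(query_motion, db_motion)
    appearance = np.linalg.norm(query_appearance - db_appearance)
    return alpha * motion + (1.0 - alpha) * appearance
```

In a retrieval setting, `combined_score` would be computed between the query video and every database sign, and the lowest-scoring signs returned as the top matches for the user to review.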
The distribution of null arguments across languages has been accounted for in terms of two distinct strategies: licensing by agreement and licensing by topic. Lillo-Martin (1986) claims that American Sign Language (ASL) exploits both strategies for licensing null arguments, depending on the morphological characteristics of the verb. Here we show that this is incorrect. Once the nonmanual correlates of agreement features (comparable to the nonmanual expressions of other syntactic features) in ASL are recognized, it becomes apparent that null arguments in this language are systematically licensed by an expression of syntactic agreement.