Sign language recognition has evolved from traditional video-based recognition to 3D image recognition. Most existing work relies on Kinect-based somatosensory terminals, which cannot precisely capture the motions of individual palm joints; linguistic details of sign language (SL), such as position, direction, and movement, must therefore be entered manually. Moreover, most studies take the positions or rotations of a virtual agent's joints as experimental data and apply classification or matching techniques built on inefficient algorithms. By fully exploiting the capabilities of the Leap Motion sensor, we compute motion trajectories automatically: features such as location, movement, and direction are derived from the motion parameters of 22 palm joints. On this basis, we propose a decision-tree-based algorithm for recognizing 3D gestures. In our experiments, 1,203 Chinese SL signs were performed in front of the Leap Motion sensor, of which 1,152 were recognized correctly, yielding a recognition rate of 95.8% with a recognition response time of only 5.4 s.
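To make the pipeline concrete, below is a minimal sketch (not the authors' implementation) of how location, movement, and direction features might be extracted from per-frame palm-joint coordinates and fed to a decision tree. The `extract_features` helper, the feature layout, and the use of scikit-learn's `DecisionTreeClassifier` in place of the paper's custom decision tree are all assumptions for illustration.

```python
# Hypothetical sketch: decision-tree gesture classification over features
# derived from Leap Motion palm-joint trajectories (frames x 22 joints x xyz).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_features(joints: np.ndarray) -> np.ndarray:
    """Reduce a (frames, 22, 3) array of joint coordinates to a fixed-length
    vector of location, movement, and direction cues (assumed feature set)."""
    palm_center = joints.mean(axis=1)              # per-frame hand location
    step = np.diff(palm_center, axis=0)            # frame-to-frame movement
    path_length = np.linalg.norm(step, axis=1).sum()
    net = palm_center[-1] - palm_center[0]         # overall motion direction
    direction = net / (np.linalg.norm(net) + 1e-9)
    return np.concatenate([palm_center[0], palm_center[-1],
                           direction, [path_length]])

def train_classifier(recordings, labels):
    """recordings: list of (frames, 22, 3) arrays; labels: sign names."""
    X = np.stack([extract_features(r) for r in recordings])
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(X, labels)
    return clf
```

A trained tree of this kind classifies a new gesture by thresholding individual feature components, which keeps inference cheap relative to matching-based approaches.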