In the mobile and ubiquitous computing community, there have been efforts to exploit sensing platforms for sign language translation [14,34,37,39,44,57,60]. These works use devices such as RGB cameras [5,21,27,29,33,35,46,54,55], motion sensors (e.g., Leap Motion) [14,41], depth cameras/sensors (e.g., Kinect) [6,10,11,16,38,48,51], or electromyogram (EMG) sensors [53,57] to capture users' hand motions, and combine the sensed data with various machine learning models to infer the word being signed.
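
All of the surveyed systems share the same high-level pipeline: featurize the captured hand motion, then feed the features to a trained classifier that outputs a word. The following is a minimal Python sketch of that generic pipeline, not any cited system's actual method; the synthetic feature vectors (standing in for sensor output such as flattened joint trajectories), the four-word vocabulary, and the choice of an SVM classifier are all illustrative assumptions.

    # Hypothetical sensing-plus-classification pipeline: sensor features in, signed word out.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    WORDS = ["hello", "thanks", "yes", "no"]      # hypothetical sign vocabulary

    # Stand-in for sensor output: one fixed-length feature vector per sign sample
    # (e.g., flattened joint trajectories from a depth camera or Leap Motion).
    X = rng.normal(size=(400, 60))
    y = rng.integers(0, len(WORDS), size=400)
    X += np.eye(len(WORDS))[y].repeat(15, axis=1)  # shift class means so labels are learnable

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # SVM as a stand-in model
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
    print("predicted word:", WORDS[clf.predict(X_test[:1])[0]])

In the real systems cited above, the synthetic features would be replaced by modality-specific ones (image embeddings for RGB cameras, skeletal joint positions for depth sensors, muscle-activation signals for EMG), and the SVM by whichever model each work adopts.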