Radar sensing technologies now offer new opportunities for gestural interaction with a smart environment by capturing microgestures via a chip embedded in a wearable device, such as a smartwatch or a ring worn on a finger. Such microgestures are issued at a very small distance from the device, whether they are contact-based (e.g., on the skin) or contactless. As this category of microgestures remains largely unexplored, this paper reports the results of a gesture elicitation study in which twenty-five participants expressed their preferred user-defined gestures for interacting with a radar-based sensor on nineteen referents representing frequent Internet-of-Things tasks. The study clustered the 25 × 19 = 475 initially elicited gestures into four categories of microgestures (micro, motion, combined, and hybrid) and thirty-one classes of distinct gesture types, and produced a consensus set of the nineteen most preferred microgestures. In a confirmatory study, twenty new participants selected gestures from this classification for thirty referents representing tasks of various orders; they reached a high agreement rate and did not identify any new gestures. This classification of radar-based gestures provides researchers and practitioners with a broader basis for exploring gestural interaction with radar-based sensors, such as for hand gesture recognition.
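The "high rate of agreement" mentioned above is typically quantified in elicitation studies with the Vatavu-Wobbrock agreement rate, which measures how often pairs of participants proposed the same gesture for a referent. A minimal sketch follows; the gesture labels and counts in the example are illustrative only, not the study's actual data.

```python
# Sketch of the agreement rate AR(r) commonly used in gesture
# elicitation studies (Vatavu & Wobbrock, 2015). The labels below
# are hypothetical, not taken from the study reported in the paper.
from collections import Counter

def agreement_rate(proposals):
    """AR(r) for one referent: the fraction of participant pairs
    that proposed the same gesture, given all proposed labels."""
    n = len(proposals)
    if n < 2:
        return 0.0
    counts = Counter(proposals)
    # Sum over each group of identical proposals of size c: c*(c-1)
    # agreeing ordered pairs, out of n*(n-1) total ordered pairs.
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Illustrative referent with 25 participants: 18 propose "swipe",
# 5 propose "tap", 2 propose "circle".
labels = ["swipe"] * 18 + ["tap"] * 5 + ["circle"] * 2
print(round(agreement_rate(labels), 3))  # → 0.547
```

A referent where all participants agree yields AR = 1.0, while fully distinct proposals yield AR = 0.0, so the measure directly reflects consensus strength per referent.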
The expansion of touch-sensitive technologies, ranging from smartwatches to wall screens, has triggered a wider use of gesture-based user interfaces and encouraged researchers to invent recognizers that are fast and accurate for end users while remaining simple enough for practitioners. Since the pioneering work on two-dimensional (2D) stroke gesture recognition based on feature extraction and classification, numerous approaches and techniques have been introduced to classify uni- and multi-stroke gestures, satisfying various properties of articulation-, rotation-, scale-, and translation-invariance. As the domain abounds in different recognizers, it becomes difficult for the practitioner to choose the right recognizer for a given application, and for the researcher to understand the state of the art. To address these needs, a targeted literature review identified 16 significant 2D stroke gesture recognizers, which were submitted to a descriptive analysis discussing their algorithm, performance, and properties, and to a comparative analysis discussing their similarities and differences. Finally, some opportunities for expanding 2D stroke gesture recognition are drawn from these analyses.
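To make the invariance properties above concrete, here is a minimal sketch in the style of template-based unistroke recognizers such as $1 (Wobbrock et al.): resample the stroke to a fixed number of points, normalize for scale and translation, and classify by smallest average point-to-point distance to a template. Rotation invariance (usually obtained by rotating candidates toward a template's indicative angle) is omitted for brevity, and the template names in the test are hypothetical.

```python
# Hedged sketch of a $1-style unistroke recognizer: this is an
# illustration of the technique, not any specific recognizer from
# the reviewed literature.
import math

N = 64  # number of resampled points per stroke

def path_length(pts):
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))

def resample(pts, n=N):
    """Resample a stroke to n equidistantly spaced points
    (articulation-invariance: drawing speed no longer matters)."""
    interval = path_length(pts) / (n - 1)
    pts, out, acc = list(pts), [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q becomes the new reference point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(pts):
    """Scale the bounding box to unit size (scale-invariance) and
    move the centroid to the origin (translation-invariance)."""
    xs, ys = zip(*pts)
    w = max(max(xs) - min(xs), 1e-9)
    h = max(max(ys) - min(ys), 1e-9)
    pts = [(x / w, y / h) for x, y in pts]
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return [(x - cx, y - cy) for x, y in pts]

def distance(a, b):
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the name of the closest template (dict: name -> points)."""
    cand = normalize(resample(stroke))
    return min(templates, key=lambda name:
               distance(cand, normalize(resample(templates[name]))))
```

Feature-based recognizers instead extract global descriptors (e.g., total rotation, bounding-box diagonal) and feed them to a classifier; the template-matching sketch above trades that generality for simplicity, which is one axis along which the reviewed recognizers differ.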