This paper presents a study that allows users to define intuitive gestures to navigate a humanoid robot. For eleven navigational commands, 385 gestures, performed by 35 participants, were analyzed. The results of the study reveal user-defined gesture sets for both novice and expert users. In addition, we present a taxonomy of the user-defined gesture sets, agreement scores for the gesture sets, and time performances of the gesture motions, as well as implications for the design of robot control, with a focus on recognition and user interfaces.
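The agreement scores referred to here are, in gesture-elicitation studies of this kind, typically computed as the sum of squared proportions of identical gesture proposals per command. A minimal sketch under that assumption; the gesture labels and counts below are illustrative, not taken from the study:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one referent (command): the sum, over groups
    of identical proposals, of (group size / total proposals) squared.
    `proposals` is a list of gesture labels, one per participant."""
    total = len(proposals)
    return sum((count / total) ** 2 for count in Counter(proposals).values())

# Hypothetical example: 35 participants propose gestures for one command;
# 20 point forward, 10 wave forward, 5 step forward.
score = agreement_score(["point"] * 20 + ["wave"] * 10 + ["step"] * 5)
print(round(score, 2))  # 0.43
```

A score of 1.0 would mean every participant proposed the same gesture; values near 1/n indicate no consensus among n distinct proposals.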
This paper presents a framework that allows users to interact with and navigate a humanoid robot using body gestures. The first part of the paper describes a study to define intuitive gestures for eleven navigational commands, based on analyzing 385 gestures performed by 35 participants. From the study results, we present a taxonomy of the user-defined gesture sets, agreement scores for the gesture sets, and time performances of the gesture motions. The second part of the paper presents a full-body interaction system for recognizing the user-defined gestures. We evaluate the system by recruiting 22 participants to test the accuracy of the proposed system. The results show that most of the defined gestures can be successfully recognized with a precision between 86% and 100% and an accuracy between 73% and 96%. We discuss the limitations of the system and present future work improvements.

Markerless body tracking technologies based on depth sensors have given researchers an easy-to-use platform for developing algorithms that recognize full-body gestures and postures in real time [1,2].
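For reference on the reported numbers, per-gesture precision and accuracy follow the standard confusion-count definitions. A minimal sketch; the counts are illustrative, not results from the evaluation:

```python
def precision(tp, fp):
    """Precision: fraction of instances recognized as this gesture
    that really were this gesture."""
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    """Accuracy: fraction of all test instances classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts for one gesture class:
tp, tn, fp, fn = 19, 60, 1, 3
print(f"precision = {precision(tp, fp):.2f}")          # 0.95
print(f"accuracy  = {accuracy(tp, tn, fp, fn):.2f}")   # 0.95
```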
Modeling is regarded as fundamental to human cognition and scientific inquiry (Schwarz and White 2005). It helps learners express and externalize their thinking, visualize and test components of their theories, and make materials more interesting. In particular, the importance of learners constructing conceptual interpretations of system behavior has been pointed out many times (Mettes and Roossink 1981; Elio and Sharf 1990; Ploetzner and Spada 1998; Frederiksen and White 2002). Modeling environments can thus make a significant contribution to the improvement of science education.

A new class of knowledge construction tools is emerging that uses logic-based (symbolic, nonnumeric) representations for expressing conceptual systems knowledge.
We present the design of a cast of pedagogical agents impersonating different educational roles in an interactive virtual learning environment. Teams of these agents are used to create different learning scenarios in order to provide learners with an engaging and motivating learning experience. Authors can employ an easy-to-use multimodal dialog authoring tool to adapt lecture and dialog content, as well as interaction management, to meet their respective requirements.