This paper investigates the role of haptic feedback in virtual embodiment in the context of active, fine manipulation. In particular, we explore which haptic cue, varying in ecological validity, has the greater influence on virtual embodiment. We conducted a within-subjects experiment with 24 participants, comparing self-reported embodiment over a humanoid avatar during a coloring task under three conditions: force feedback, vibrotactile feedback, and no haptic feedback. In this experiment, force feedback was the more ecological cue, as it matched reality more closely, while vibrotactile feedback was more symbolic. Taken together, our results show that force feedback is significantly superior to no haptic feedback with respect to embodiment, and significantly superior to both other conditions with respect to subjective performance. These results suggest that more ecological feedback is better suited to eliciting embodiment during fine manipulation tasks.
This paper presents a new design and evaluation of customizable gesture commands on pen-based devices. Our objective is to help users during the definition of gestures by detecting confusion among gestures. We also help users memorize gestures through a new type of menu, "Customizable Gesture Menus". These menus are associated with an evolving gesture recognition engine that learns incrementally, starting from only a few data samples. Our research focuses on making the user and the recognition system learn at the same time, hence the term "cross-learning". Three experiments are presented in detail in this paper to support these ideas.
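The incremental learner behind such menus must produce usable predictions from its very first samples. Below is a minimal sketch of that idea, assuming a simple nearest-class-mean classifier over gesture feature vectors; the class `IncrementalGestureRecognizer` and its methods are illustrative names, and the paper's actual engine is a fuzzy inference system rather than this stand-in.

```python
# Minimal sketch of an incrementally trained gesture classifier.
# The feature representation and nearest-class-mean rule are
# illustrative assumptions, not the paper's fuzzy inference engine.
import numpy as np

class IncrementalGestureRecognizer:
    """Learns class prototypes online, starting from a single sample each."""

    def __init__(self):
        self.means = {}    # class label -> running mean feature vector
        self.counts = {}   # class label -> number of samples seen

    def add_sample(self, label, features):
        """Update the prototype for `label` with one new gesture sample."""
        x = np.asarray(features, dtype=float)
        if label not in self.means:
            self.means[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Incremental mean update: m += (x - m) / n
            self.means[label] += (x - self.means[label]) / self.counts[label]

    def predict(self, features):
        """Return the label of the closest class prototype."""
        x = np.asarray(features, dtype=float)
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))
```

In a cross-learning loop, each gesture the user draws both updates the prototypes and exercises the user's own memory of the command set, so user and recognizer improve together.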
When designing virtual embodiment studies, one of the key choices is the nature of the experimental factors: between-subjects or within-subjects. It is well known that each design has advantages and disadvantages in terms of statistical power, sample size requirements, and confounding factors. This paper reports a within-subjects experiment with 92 participants comparing self-reported embodiment scores in a visuomotor task under two conditions: synchronous motions and asynchronous motions with a latency of 300 ms. Using the gathered data and a Monte-Carlo method, we created numerous simulations of within- and between-subjects experiments by selecting subsets of the data. In particular, we explored the impact of the number of participants on the replicability of the results of the 92-participant within-subjects experiment. For the between-subjects simulations, only the first condition experienced by each participant was used. The results showed that while replicability increased with the number of participants for the within-subjects simulations, the between-subjects simulations were unable to replicate the initial results regardless of the number of participants. We discuss the potential reasons for this surprising result and methodological practices that could mitigate them.
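To make the resampling procedure concrete, here is a hedged sketch of such a Monte-Carlo simulation: subsets of the full participant pool are drawn repeatedly and re-analyzed to estimate how often a smaller experiment would replicate the original effect. The function name `replication_rate` and the use of t-tests are assumptions for illustration, not test statistics taken from the paper.

```python
# Sketch of Monte-Carlo replication analysis by participant subsampling.
# Assumes per-participant score arrays; the choice of t-tests is an
# illustrative assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def replication_rate(sync, async_, first_is_sync, n,
                     design, runs=10_000, alpha=0.05):
    """Fraction of simulated n-participant experiments with p < alpha.

    sync, async_  : embodiment scores per participant in each condition
    first_is_sync : bool array, True if a participant saw sync first
    design        : 'within' (paired) or 'between' (first condition only)
    """
    hits = 0
    for _ in range(runs):
        idx = rng.choice(len(sync), size=n, replace=False)
        if design == "within":
            # Paired comparison: every sampled participant did both conditions.
            _, p = stats.ttest_rel(sync[idx], async_[idx])
        else:
            # Between-subjects simulation: keep only each participant's
            # first condition, then compare the two independent groups.
            g_sync = sync[idx][first_is_sync[idx]]
            g_async = async_[idx][~first_is_sync[idx]]
            _, p = stats.ttest_ind(g_sync, g_async)
        hits += p < alpha
    return hits / runs
```

Sweeping `n` from small subsets up to the full sample then traces how replicability grows with participant count under each design.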
This paper presents a new method to help users define personalized gesture commands (on pen-based devices) that maximize the recognition performance of the classifier. The use of gesture commands gives rise to a cross-learning situation: the user has to learn and memorize the command gestures, and the classifier has to learn and recognize the drawn gestures. The classification task associated with customized gesture commands is difficult because the classifier has only very few samples per class to start learning from. We therefore need an evolving recognition system that can start from scratch, or from very few data samples, and learn incrementally to achieve good performance over time. Our objective is to make users aware of the recognizer's difficulties during the definition of commands, by detecting confusion among gesture classes, in order to help them define a gesture set that yields good recognition performance from the beginning. To detect confusing classes, we apply confusion-reject principles to our evolving recognizer, which is based on a first-order fuzzy inference system. A realistic experiment with 55 participants validates our confusion detection technique, showing that our method leads to a significant improvement in the classifier's recognition performance.
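As one way to picture the confusion-reject step, the sketch below flags a pair of gesture classes as confusing when the recognizer's two best membership scores are too close to separate reliably. The `detect_confusion` helper and its threshold are hypothetical; the paper's actual criterion comes from its first-order fuzzy inference system.

```python
# Minimal sketch of confusion reject over per-class membership scores.
# The threshold value and score source are illustrative assumptions.
def detect_confusion(scores, threshold=0.1):
    """Flag ambiguity given per-class membership scores.

    scores    : dict mapping class label -> membership score in [0, 1];
                assumes at least two classes are defined
    threshold : minimum gap required between the two best scores

    Returns (best_label, confused_with or None).
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best[1] - runner_up[1] < threshold:
        # Ambiguous: the two top classes are too similar, so the interface
        # can warn the user and suggest redefining one of the gestures.
        return best[0], runner_up[0]
    return best[0], None
```

During gesture definition, a non-None second return value would trigger the warning that prompts the user to redraw or replace one of the conflicting gestures.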