Joint object transfer is a complex task that is less structured and less specific than the tasks typically encountered in industrial settings. When two humans perform such a task together, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to each other's actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e., infer the interaction state) and act accordingly. These interaction states characterize whether the partners work in harmony, face conflicts, or remain passive during the interaction. Understanding how two humans work together during physical interaction is important when exploring how a robotic assistant should operate in similar settings. This study is a first step toward an automatic classification mechanism that identifies the interaction state during ongoing object comanipulation. The classification is performed on a dataset collected from 40 subjects paired into 20 dyads. Each dyad performs a physical human-human interaction (pHHI) task, moving an object in a haptics-enabled virtual environment to reach predefined goal configurations. We propose a sliding-window approach for feature extraction and demonstrate an online classification methodology for identifying interaction patterns. We evaluate our approach with (1) a support vector machine classifier (SVMc) and (2) a Gaussian Process classifier (GPc) for multi-class classification, and achieve over 80% accuracy with both classifiers when identifying general interaction types.
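As a concrete illustration of the pipeline the abstract describes, the Python sketch below extracts sliding-window summary statistics from multichannel interaction signals and trains both an SVM and a Gaussian Process classifier on the resulting feature vectors. The window length, stride, choice of statistics, channel count, and three-way interaction-state labels are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch: sliding-window feature extraction followed by
# multi-class classification with SVM and Gaussian Process classifiers.
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def window_features(signals, window=100, stride=20):
    """Slide a fixed-length window over multichannel time-series data
    (e.g., forces and velocities) and summarize each window with the
    per-channel mean and standard deviation."""
    feats = []
    for start in range(0, len(signals) - window + 1, stride):
        seg = signals[start:start + window]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
    return np.array(feats)

# Synthetic stand-in for dyadic interaction recordings: 4 channels,
# with one integer interaction-state label per window.
rng = np.random.default_rng(0)
signals = rng.normal(size=(2000, 4))
X = window_features(signals)
y = rng.integers(0, 3, size=len(X))  # e.g., harmony / conflict / passive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (SVC(kernel="rbf"), GaussianProcessClassifier()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```

With real force/velocity recordings in place of the synthetic signals, the same loop supports online use: classify each new window as it completes, so the interaction state is updated continuously during the collaboration.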
In this paper, a grounding framework is proposed that combines unsupervised and supervised grounding by extending an unsupervised grounding model with a mechanism for learning from explicit human teaching. To investigate whether explicit teaching improves the sample efficiency of the original model, both models are evaluated in an interaction experiment between a human tutor and a robot, in which synonymous shape, color, and action words are grounded through geometric object characteristics, color histograms, and kinematic joint features, respectively. The results show that explicit teaching improves the sample efficiency of the unsupervised baseline model.
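The sketch below illustrates one piece of such a framework under stated assumptions: grounding color words in color histograms, where each explicit teaching episode adds a labeled (word, feature) pair and a new percept is grounded to the word with the nearest mean feature vector. The histogram binning and the nearest-centroid association rule are placeholders chosen for brevity, not the paper's actual model.

```python
# Minimal sketch of supervised word grounding via explicit teaching,
# using normalized RGB color histograms as the perceptual features.
import numpy as np

def color_histogram(pixels, bins=8):
    """Flattened, normalized 3D RGB histogram for an (N, 3) array of
    pixel values in [0, 255]."""
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

class GroundingModel:
    """Associates words with perceptual features; explicit teaching
    simply stores a labeled (word, feature) example."""
    def __init__(self):
        self.examples = {}  # word -> list of feature vectors

    def teach(self, word, features):
        self.examples.setdefault(word, []).append(features)

    def ground(self, features):
        # Return the taught word whose mean feature vector is closest.
        return min(self.examples,
                   key=lambda w: np.linalg.norm(
                       np.mean(self.examples[w], axis=0) - features))

# Synthetic "red" and "blue" pixel patches stand in for camera input.
rng = np.random.default_rng(1)
model = GroundingModel()
red = rng.integers(200, 256, size=(500, 3)); red[:, 1:] //= 4
blue = rng.integers(200, 256, size=(500, 3)); blue[:, :2] //= 4
model.teach("red", color_histogram(red))
model.teach("blue", color_histogram(blue))
query = rng.integers(200, 256, size=(500, 3)); query[:, 1:] //= 4
print(model.ground(color_histogram(query)))  # -> "red"
```

Shape and action words would follow the same pattern with geometric object characteristics and kinematic joint features substituted for the histograms.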