In manufacturing, traditional task pre-programming methods limit the efficiency of human–robot skill transfer. This paper proposes a novel task-learning strategy that enables robots to learn skills flexibly from human demonstrations and to generalize those skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm that segments the complete movement into movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model with Gaussian mixture regression (GMM-GMR) to extract an optimal trajectory that retains the essential features of the human demonstration, and we use dynamical movement primitives (DMPs) for trajectory learning and generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm that learns goal configurations, i.e., the spatial relationships between task-relevant objects. Only one multi-operation demonstration is required for learning, and the robot can generalize the goal configurations to new task situations while following the task execution order of the demonstration. A series of peg-in-hole experiments shows that the proposed task-learning strategy obtains exact pick-and-place points and generates smooth, human-like trajectories, verifying its effectiveness.
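The abstract names DMPs as the trajectory-generalization component. As a rough illustration of that step only, below is a minimal one-DOF sketch of the standard discrete DMP formulation (after Ijspeert et al.), not the paper's implementation: the class name DMP1D, the gains alpha_z and alpha_x, the basis-width heuristic, and the toy demonstration are all illustrative assumptions.

```python
import numpy as np

class DMP1D:
    """One-DOF discrete DMP: tau*z' = a_z*(b_z*(g - y) - z) + f(x), tau*y' = z."""
    def __init__(self, n_basis=30, alpha_z=25.0, alpha_x=4.0):
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, alpha_z / 4.0, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centers in phase x
        self.h = n_basis ** 1.5 / self.c                        # heuristic basis widths
        self.w = np.zeros(n_basis)

    def fit(self, y_demo, dt):
        """Invert the transformation system on one demo to learn forcing-term weights."""
        T = len(y_demo)
        self.tau, self.y0 = T * dt, y_demo[0]
        g = y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.linspace(0, 1, T))        # canonical phase over the demo
        f_target = (self.tau ** 2 * ydd
                    - self.alpha_z * (self.beta_z * (g - y_demo) - self.tau * yd))
        s = x * (g - self.y0)                                   # standard amplitude scaling
        for i in range(len(self.w)):                            # locally weighted regression
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (s * psi) @ f_target / ((s ** 2 * psi).sum() + 1e-12)

    def rollout(self, g, y0=None, dt=0.005):
        """Integrate the DMP toward a (possibly new) goal g -- the generalization step."""
        y = self.y0 if y0 is None else y0
        y_start, z, x, path = y, 0.0, 1.0, []
        for _ in range(int(self.tau / dt)):
            psi = np.exp(-self.h * (x - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-12) * x * (g - y_start)
            z += dt * (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.tau
            y += dt * z / self.tau
            x += dt * (-self.alpha_x * x) / self.tau
            path.append(y)
        return np.array(path)

# Toy usage: learn from one smooth demonstration, then replay toward a new goal.
t = np.linspace(0, 1, 200)
demo = t ** 2 * (3 - 2 * t)            # smoothstep from 0 to 1 as a stand-in demo
dmp = DMP1D()
dmp.fit(demo, dt=1 / 200)
generalized = dmp.rollout(g=1.5)       # same motion shape, rescaled to goal 1.5
```

Changing g in rollout rescales the learned motion toward a new goal while preserving its shape, which is the property a DMP-based strategy relies on when reusing learned MPs at new pick-and-place points.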