Purpose
This paper aims to present a human-in-the-loop natural teaching paradigm based on scene-motion cross-modal perception, which facilitates manipulation intelligence and robot teleoperation.
Design/methodology/approach
The proposed natural teaching paradigm is used to telemanipulate a life-size humanoid robot in a complicated working scenario. First, a vision sensor projects mission scenes onto virtual reality glasses for human-in-the-loop reactions. Second, a motion capture system is established to retarget eye-body synergic movements to a skeletal model. Third, real-time data transfer is realized through the publish-subscribe messaging mechanism of the Robot Operating System (ROS). Next, joint angles are computed through a fast mapping algorithm and sent to a slave controller through a serial port. Finally, visualization terminals make it convenient to compare the two motion systems.
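To illustrate the data-transfer step, the following is a minimal sketch, not the authors' implementation: a rospy node subscribes to retargeted joint angles published as sensor_msgs/JointState messages and forwards them to the slave controller over a serial port. The topic name, joint encoding, serial port and baud rate are illustrative assumptions.

```python
# Minimal sketch of the ROS publish-subscribe data transfer described above.
# Topic name, message framing, serial port and baud rate are assumptions,
# not taken from the paper.
import rospy
import serial
from sensor_msgs.msg import JointState

SERIAL_PORT = "/dev/ttyUSB0"   # hypothetical slave-controller port
BAUD_RATE = 115200             # hypothetical baud rate

def forward_to_slave(msg, link):
    """Pack the retargeted joint angles into one ASCII line and send it."""
    line = ",".join("%.4f" % q for q in msg.position) + "\n"
    link.write(line.encode("ascii"))

def main():
    rospy.init_node("teleop_bridge")
    link = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=0.01)
    # Subscribe to the joint angles produced by the mapping algorithm.
    rospy.Subscriber("/retargeted_joint_states", JointState,
                     forward_to_slave, callback_args=link)
    rospy.spin()

if __name__ == "__main__":
    main()
```

In such a bridge, the publish-subscribe decoupling lets the mapping algorithm and the serial forwarder run as independent nodes, which matches the real-time, modular pipeline the paradigm describes.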
Findings
Experimentation in various industrial mission scenes, such as approaching flanges, demonstrates the advantages of natural teaching, including real-time performance, high accuracy, repeatability and dexterity.
Originality/value
The proposed paradigm realizes a natural cross-modal combination of perceptual information and enhances the working capacity and flexibility of industrial robots, paving a new way for effective robot teaching and autonomous learning.