2019
DOI: 10.1007/978-3-030-34995-0_51

Real-Time Gestural Control of Robot Manipulator Through Deep Learning Human-Pose Inference

Abstract: With the rise of collaborative robots, human-robot interaction needs to be as natural as possible. In this work, we present a framework for real-time continuous motion control of a real collaborative robot (cobot) from gestures captured by an RGB camera. Using existing deep learning techniques, we obtain human skeletal pose information in both 2D and 3D. We use it to design a controller that makes the robot mirror, in real time, the movements of a human arm or hand.
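
As an illustration of the mirroring idea described in the abstract, the sketch below maps a 3D wrist keypoint (as a pose estimator might provide) to a clamped end-effector target for the robot. It is a minimal sketch, not the authors' implementation; the workspace limits, scale factor, and function names are assumptions.

```python
import numpy as np

# Hypothetical sketch: map a 3D human wrist keypoint (from a pose estimator)
# to a target position for the robot end effector, so the cobot "mirrors"
# the operator's arm. All names and numeric values here are assumptions.

# Workspace limits of the robot, in metres (assumed values).
WORKSPACE_MIN = np.array([-0.4, -0.4, 0.1])
WORKSPACE_MAX = np.array([0.4, 0.4, 0.6])

def human_to_robot_target(wrist_xyz, shoulder_xyz, scale=0.5):
    """Convert a wrist position, expressed relative to the shoulder,
    into a clamped end-effector target in the robot base frame."""
    # Arm vector in the human (camera) frame.
    arm_vec = np.asarray(wrist_xyz) - np.asarray(shoulder_xyz)
    # Scale human arm motion down to the robot workspace.
    target = scale * arm_vec
    # Keep the command inside the robot's safe workspace.
    return np.clip(target, WORKSPACE_MIN, WORKSPACE_MAX)

if __name__ == "__main__":
    # Example: one frame of (fake) 3D keypoints in metres, camera frame.
    shoulder = [0.0, 0.0, 1.2]
    wrist = [0.35, -0.10, 0.9]
    print(human_to_robot_target(wrist, shoulder))
```

Clamping the command to a fixed workspace is one simple way to keep the mirrored motion within the cobot's safe reach.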

Cited by 10 publications (3 citation statements)
References 9 publications
“…In the literature there are few works on the acquisition of human data through sensors such as cameras for non-anthropomorphic manipulators. In (Martin and Moutarde, 2019), a first approach to solving this problem is presented, in which an RGB-D camera and OpenPose are used to control a UR3 arm through an inverse kinematics (IK) process. Other works such as (Gao et al., 2019) present a similar idea, using BodyPoseNet to extract body features for dual parallel manipulation, taking the position of the opposite arm into account.…”
Section: Estado del Arte (State of the Art) (unclassified)
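
The excerpt above mentions driving a UR3 arm from OpenPose keypoints through an inverse-kinematics step. As a standalone illustration of that step (not the cited authors' code), the sketch below solves the analytical IK of a 2-link planar arm; a real pipeline would use a full 6-DOF solver, and the link lengths here are assumptions.

```python
import numpy as np

# Illustrative 2-link planar inverse kinematics, standing in for the full
# 6-DOF solver a UR3 pipeline would use. Link lengths are assumed values.
L1, L2 = 0.25, 0.25  # link lengths in metres (assumed)

def planar_ik(x, y):
    """Return (shoulder, elbow) joint angles that place the 2-link tip at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines for the elbow angle; clip guards against rounding errors.
    cos_elbow = np.clip((d2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
    elbow = np.arccos(cos_elbow)
    shoulder = np.arctan2(y, x) - np.arctan2(L2 * np.sin(elbow), L1 + L2 * np.cos(elbow))
    return shoulder, elbow

if __name__ == "__main__":
    # Target taken from a (hypothetical) wrist keypoint projected into the arm plane.
    print(planar_ik(0.3, 0.2))
```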
“…The research studies in [24], [25] faced issues and made compromises when teleoperating a robot that relies on an OpenPose detection system. To overcome jerks and jumps of the robotic counterpart, the authors chose heuristic solutions such as Euclidean filters on the subjects' links or a predetermined operating area of the agent matched to the camera Field of View (FoV).…”
Section: Human-Robot Interaction in Simulation (mentioning)
confidence: 99%
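
A minimal sketch of the kind of heuristic filtering described in this excerpt: a Euclidean gate that rejects keypoint detections which jump too far between consecutive frames. The threshold and class name are assumptions, not taken from [24], [25].

```python
import numpy as np

# Hypothetical jump filter: gate keypoint updates on the Euclidean distance
# between consecutive detections, holding the last accepted value otherwise.
MAX_JUMP = 0.15  # metres per frame; larger displacements are treated as glitches

class JumpFilter:
    def __init__(self):
        self.last = None  # last accepted keypoint position

    def update(self, point):
        """Return a stable keypoint: detections that jump too far are ignored."""
        point = np.asarray(point, dtype=float)
        if self.last is None or np.linalg.norm(point - self.last) <= MAX_JUMP:
            self.last = point  # accept plausible motion
        return self.last  # otherwise hold the previous value

if __name__ == "__main__":
    f = JumpFilter()
    for p in [[0.0, 0.0, 1.0], [0.02, 0.0, 1.0], [0.8, 0.5, 1.0], [0.04, 0.01, 1.0]]:
        print(f.update(p))
```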
“…This breakthrough has stimulated interest in the skeletal modality, since it proved to be sufficient to describe and understand the motion of a given action without any background context. This has made pose-based action recognition preferred over other modalities in a large number of real-time scenarios for human action recognition, such as human-robot interaction [24], [3], medical rehabilitation applications [25], [8], or pedestrian action prediction [12], [11], [13]. Commonly used learning architectures for pose-based action recognition include 1D/2D convolutional networks [9], [27], recurrent networks [1], [39], a combination of one of the latter with attention mechanisms [21], [16], or graph-based models [56], [50].…”
Section: A. Pose-Based Action Recognition (mentioning)
confidence: 99%
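
To make the architectural families listed above concrete, here is a minimal sketch of a 1D-convolutional pose-based action recognizer, assuming PyTorch is available; the numbers of joints, classes, and channels are placeholder assumptions, not values from the cited works.

```python
import torch
import torch.nn as nn

# Minimal sketch of a 1D-convolutional pose-based action recognizer: the input
# is a sequence of skeleton frames flattened to (joints x coords) channels,
# and the output is a score per action class. All sizes are assumptions.
NUM_JOINTS, COORDS, NUM_CLASSES = 18, 2, 10  # e.g. 18 2D keypoints, 10 actions

class PoseConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        in_ch = NUM_JOINTS * COORDS
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=3, padding=1),  # temporal conv over frames
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time
        )
        self.classifier = nn.Linear(128, NUM_CLASSES)

    def forward(self, x):
        # x: (batch, frames, joints * coords) -> (batch, channels, frames)
        x = x.transpose(1, 2)
        return self.classifier(self.net(x).squeeze(-1))

if __name__ == "__main__":
    clip = torch.randn(4, 30, NUM_JOINTS * COORDS)  # 4 clips of 30 frames
    print(PoseConvNet()(clip).shape)  # torch.Size([4, 10])
```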