2017
DOI: 10.1016/j.robot.2017.04.007
Multimodal sensor-based whole-body control for human–robot collaboration in industrial settings

Cited by 72 publications (37 citation statements) | References 11 publications
“…Vision/force integration is also explored in the context of collaborative screw fastening [40], where data from a Kinect, a black-and-white camera and a force sensor, deployed to track the human hand, the screw and the contact force, respectively, are used alternately for robot control. De Gea Fernández et al. [62] extended sensor data integration from an IMU, an RGB-D (red, green, blue, depth) camera and a laser scanner to robot whole-body control. The RGB-D camera and laser scanner are responsible for human tracking, while the IMU, integrated into the operator's clothing, recognises human intention through gestures.…”
Section: Human-Robot Collaborative Assembly
confidence: 99%
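The arbitration described in this statement (RGB-D camera and laser scanner for human tracking, IMU-based gestures for intention) can be sketched minimally as follows. This is an illustrative assumption of how the modalities might be combined, not the paper's actual controller; the names `Perception`, `select_behavior`, the gesture labels and the 2.0 m workspace radius are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Perception:
    human_position: tuple[float, float]  # fused from RGB-D camera and laser scanner
    gesture: str                         # recognised from the IMU in the operator's clothing


def select_behavior(p: Perception, workspace_radius: float = 2.0) -> str:
    """Hypothetical arbitration: tracking decides proximity, gesture decides intent."""
    dist = (p.human_position[0] ** 2 + p.human_position[1] ** 2) ** 0.5
    if dist > workspace_radius:
        return "nominal"        # human outside the shared workspace
    if p.gesture == "handover":
        return "approach_hand"  # intention signalled explicitly via gesture
    return "slow_coexist"       # human nearby but no explicit command
```

The point of the split is that neither modality alone suffices: tracking without intent recognition can only trigger conservative slow-downs, while gestures without tracking cannot guarantee a safe approach.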
“…In traditional robot applications, a violation of these borders simply results in an alarm and a shutdown of the production line. In more advanced applications, this information is used to adapt the robot's behaviour when the boundaries mentioned above are violated by an object or a human [de Gea Fernández et al. 2017].…”
Section: Glossary (Continued)
confidence: 99%
“…In modern robotics, safety boundaries are not static but can be adaptive, varying with the context of the robot's task and application [Vogel et al. 2013]; e.g. in a human-robot cooperation scenario the robot would not come to a full stop but rather slow to a predefined speed [de Gea Fernández et al. 2017]. Because the kind of adaptation depends on the context of the task, this adaptive type of safety level can also be seen as a context- or application-specific safety level (see also Haddadin [2015]).…”
Section: Glossary (Continued)
confidence: 99%
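The graded response described here (slow to a predefined speed rather than halting) amounts to a distance-dependent speed limit. A minimal sketch, assuming hypothetical zone boundaries and speed values (the 0.5 m / 1.5 m thresholds and the 0.25 speed factor are illustrative, not from the cited works):

```python
def speed_limit(human_distance_m: float,
                stop_zone: float = 0.5,
                slow_zone: float = 1.5,
                reduced_speed: float = 0.25,
                full_speed: float = 1.0) -> float:
    """Context-dependent safety level: reduce speed instead of a hard stop."""
    if human_distance_m < stop_zone:   # imminent contact: halt
        return 0.0
    if human_distance_m < slow_zone:   # cooperation zone: predefined reduced speed
        return reduced_speed
    return full_speed                  # no human nearby: nominal speed
```

In a traditional cell only the first branch would exist (any violation stops the line); the middle branch is what makes the safety level context-specific.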