2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52729.2023.00931
Learning Human-to-Robot Handovers from Point Clouds

Sammy Christen,
Wei Yang,
Claudia Pérez-D'Arpino
et al.

Abstract: Vision-based human-to-robot handover is an important and challenging task in human-robot interaction. Recent work has attempted to train robot policies by interacting with dynamic virtual humans in simulated environments, where the policies can later be transferred to the real world. However, a major bottleneck is the reliance on human motion capture data, which is expensive to acquire and difficult to scale to arbitrary objects and human grasping motions. In this paper, we introduce a framework that can gener…

Cited by 22 publications (1 citation statement)
References 52 publications
“…Some extensions for extrapolating to such tasks can be explored by using better generative models [8], [10], [42] or adding explicit constraints for combining reactive motion generation and planning [43]. Our future work would also explore incorporating task-related constraints such as object information for handover grasps [44], [45], force information for enabling natural interactive behaviors [46]-[48], and ergonomic and safety constraints for a more user-friendly interaction [49], [50].…”
Section: Discussion (mentioning)
Confidence: 99%