2021
DOI: 10.48550/arxiv.2109.06837
Preprint

Object Shell Reconstruction: Camera-centric Object Representation for Robotic Grasping

Abstract: Robots can effectively grasp and manipulate objects using their 3D models. In this paper, we propose a simple shape representation and a reconstruction method that outperforms state-of-the-art methods in terms of geometric metrics and enables grasp generation with high precision and success. Our reconstruction method models the object geometry as a pair of depth images, composing the "shell" of the object. This representation allows using image-to-image residual ConvNet architectures for 3D reconstruction, gen…
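As a rough illustration of the representation described above, the sketch below (an assumption for this page, not the paper's released code) lifts a camera-centric shell, given as an "entry" (front) and an "exit" (back) depth image, into a single 3D point cloud by pinhole back-projection. The intrinsics fx, fy, cx, cy and all function names are placeholders.

```python
# Minimal sketch, assuming a shell given as two depth maps in the same
# camera frame: an "entry" (visible) and an "exit" (occluded) surface.
# fx, fy, cx, cy are placeholder pinhole intrinsics; not the authors' code.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (meters) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                         # drop empty pixels

def shell_to_point_cloud(entry_depth, exit_depth, fx, fy, cx, cy):
    """Fuse the entry (front) and exit (back) depth images of the shell."""
    front = depth_to_points(entry_depth, fx, fy, cx, cy)
    back = depth_to_points(exit_depth, fx, fy, cx, cy)
    return np.concatenate([front, back], axis=0)
```

Because both depth maps are expressed in the same camera frame, fusing them needs no registration step, which is what makes a camera-centric shell convenient for downstream grasp planning.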

Cited by 2 publications (3 citation statements) | References 26 publications
“…Recently, vision-based robotic grasping has attracted increasing attention as a central research theme due to the fast advancements of deep learning and computer vision, and has achieved remarkable progress [5], [19]. In general, visual grasping approaches can be categorized into two main streams: object reconstruction based methods and end-to-end methods [19], [20]. In general, 3D reconstruction of free-form objects can enable accurate grasp planning, while end-to-end methods can generate grasp proposals directly from the camera sensor [20].…”
Section: A. General Grasping and Visual Grasping (mentioning, confidence: 99%)
“…In general, visual grasping approaches can be categorized into two main streams: object reconstruction based methods and end-to-end methods [19], [20]. In general, 3D reconstruction of free-form objects can enable accurate grasp planning, while end-to-end methods can generate grasp proposals directly from the camera sensor [20]. By sampling the grasp candidates or generating suitable grasps, end-to-end methods demonstrated promising results in generalizing to even unseen and novel objects/backgrounds, and have gained increasing popularity [5].…”
Section: A. General Grasping and Visual Grasping (mentioning, confidence: 99%)
“…Kiatos et al. [135] use a variational autoencoder [136] to predict the occluded surface points and associated normals of a partial 3D point cloud. Chavan-Dafle et al. [137] predict a depth image that estimates the 'back' side of an object from a masked depth image. The front and back sides can then be stitched together quickly to form an object mesh.…”
Section: A. Shape Approximation (mentioning, confidence: 99%)
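To make the front/back stitching idea mentioned in the statement above concrete, the following is an assumed sketch (not the implementation cited as [137]): each depth map is triangulated over its pixel grid inside the object mask, and the two surfaces are merged into one vertex/face list. The intrinsics and helper names are hypothetical.

```python
# Assumed sketch of stitching front and back depth images into a mesh.
# mask is an HxW boolean object mask; fx, fy, cx, cy are placeholder
# pinhole intrinsics. Not the implementation from the cited work.
import numpy as np

def grid_mesh(depth, mask, fx, fy, cx, cy, flip=False):
    """Triangulate a masked HxW depth map over its pixel grid.

    Returns (vertices Nx3, faces Mx3). `flip` reverses the triangle
    winding so the back surface faces outward.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    verts = np.stack([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth], axis=-1).reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    faces = []
    for r in range(h - 1):
        for c in range(w - 1):
            # Only triangulate 2x2 pixel blocks fully inside the mask.
            if not (mask[r, c] and mask[r, c + 1]
                    and mask[r + 1, c] and mask[r + 1, c + 1]):
                continue
            a, b = idx[r, c], idx[r, c + 1]
            d, e = idx[r + 1, c], idx[r + 1, c + 1]
            tri1, tri2 = (a, e, b), (a, d, e)
            if flip:
                tri1, tri2 = tri1[::-1], tri2[::-1]
            faces.extend([tri1, tri2])
    return verts, np.asarray(faces, dtype=np.int64)

def shell_to_mesh(front_depth, back_depth, mask, fx, fy, cx, cy):
    """Merge the front and back grid meshes into one vertex/face list."""
    vf, ff = grid_mesh(front_depth, mask, fx, fy, cx, cy, flip=False)
    vb, fb = grid_mesh(back_depth, mask, fx, fy, cx, cy, flip=True)
    verts = np.concatenate([vf, vb], axis=0)
    faces = np.concatenate([ff, fb + len(vf)], axis=0)   # offset back indices
    return verts, faces
```

Since both surfaces are parameterized on the same pixel grid, the merge is a simple concatenation with an index offset; sealing the silhouette boundary between the two surfaces would be an additional step not shown here.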