2020
DOI: 10.3389/frobt.2020.00120
DGCM-Net: Dense Geometrical Correspondence Matching Network for Incremental Experience-Based Robotic Grasping

Abstract: This article presents a method for grasping novel objects by learning from experience. Successful attempts are remembered and then used to guide future grasps such that more reliable grasping is achieved over time. To transfer the learned experience to unseen objects, we introduce the dense geometric correspondence matching network (DGCM-Net). This applies metric learning to encode objects with similar geometry nearby in feature space. Retrieving relevant experience for an unseen object is thus a nearest neighbor…
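The experience-retrieval idea in the abstract — encode objects with metric learning so that similar geometry lies nearby in feature space, then look up past grasps by nearest neighbour — can be illustrated with a minimal sketch. This is not the DGCM-Net implementation; `nearest_experience` and the toy embeddings are hypothetical stand-ins for the learned encoder's output.

```python
import numpy as np

def nearest_experience(query_feat, stored_feats):
    """Return the index of the stored grasp experience whose embedding
    is closest to the query embedding (Euclidean distance)."""
    dists = np.linalg.norm(stored_feats - query_feat, axis=1)
    return int(np.argmin(dists))

# Toy example: three stored object embeddings and one unseen-object query.
stored = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
query = np.array([0.9, 1.1])
idx = nearest_experience(query, stored)  # retrieves the closest experience
```

In the actual system the embeddings would come from the trained network, and the retrieved experience would supply the grasp to transfer.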

Cited by 34 publications (18 citation statements)
References 59 publications
“…Compared to the original NOCS representation [11], which recovers a 7D pose of novel object instances, the proposed NUNOCS allows scaling independently in each dimension when converting to the canonical space. Therefore, more fine-grained dense correspondence across object instances can be established by measuring their similarity (L2 distance in our case) in C. This is especially the case for instances with dramatically different 3D scales, as shown in the wrapped figure, where colors indicate correspondence similarity in C. A key difference from the related work VD-NOC [44], which directly normalizes the scanned point cloud in the camera frame, is that the proposed NUNOCS representation is object-centric and thus agnostic to specific camera parameters or viewpoints.…”
Section: A Category-Level Canonical NUNOCS Representation
confidence: 94%
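The per-dimension scaling that the statement above contrasts with uniform (7D-pose) normalization can be sketched as follows. This is an assumed simplification, not the NUNOCS implementation: each axis of a point cloud is normalized independently to the unit cube, so instances with very different per-axis extents align, and dense correspondence is then read off by L2 distance in the canonical space.

```python
import numpy as np

def to_nunocs(points):
    """Map an (N, 3) point cloud into [0, 1]^3 with an independent
    scale per axis (non-uniform normalization)."""
    mins = points.min(axis=0)
    ranges = points.max(axis=0) - mins
    return (points - mins) / ranges

def dense_correspondence(canon_a, canon_b):
    """For each canonical point of A, the index of the nearest canonical
    point of B (L2 distance), giving dense cross-instance matches."""
    d = np.linalg.norm(canon_a[:, None, :] - canon_b[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Because each axis is rescaled separately, an object and a stretched copy of it map to the same canonical coordinates, which a single uniform scale would not achieve.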
“…Closely related to our work, some methods transfer grasps to other objects by building dense correspondence. DGCM-Net [19] transfers grasps by predicting view-dependent normalized object coordinate (VD-NOC) values between pairs of depth images. CaTGrasp [29] maps the input point cloud to a Non-Uniform Normalized Object Coordinate Space (NUNOCS) where the spatial correspondence is built.…”
Section: Model-Based Methods
confidence: 99%
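Both methods described above (DGCM-Net and CaTGrasp) transfer grasps through dense correspondence. A minimal sketch of that final step, under the assumption that a grasp is represented by contact points on the source object, is the following; `transfer_grasp` is a hypothetical helper, not an API of either system.

```python
import numpy as np

def transfer_grasp(contact_idx, correspondence, target_points):
    """Transfer a grasp to a novel object via dense correspondence.

    contact_idx: indices of the grasp contact points on the source cloud.
    correspondence[i]: index of the target point matched to source point i.
    Returns the contact locations on the target object."""
    return target_points[correspondence[contact_idx]]
```

In the full pipelines the correspondence comes from the learned canonical representation (VD-NOC or NUNOCS), and the transferred contacts are refined into an executable gripper pose.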
“…As a result, the grasp can be transferred to other objects of the same category. DGCM-Net [24] adopted the network to learn a reliable grasp, and then transferred the grasp to unseen objects in the same category. In order to transfer the grasp, both methods need to detect the object and separate the point cloud of the object from the depth map first.…”
Section: Transferring Grasps From Existing Ones
confidence: 99%