Published: 2021
DOI: 10.3389/fnbot.2021.719731

Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping

Abstract: To grasp target objects stably and in the correct order in object-stacking scenes, it is important for the robot to reason about the relationships between objects and to obtain an intelligent manipulation order, enabling more advanced interaction between the robot and the environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and the manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then a…
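
The abstract outlines a two-stage pipeline: detect objects in the RGB image, then reason over a graph of the detections to output pairwise relationships and a manipulation order. The sketch below illustrates that general idea only; the pairwise relation classifier, the relation categories, and the topological-sort step for the grasp order are assumptions made for illustration, not the GVMRN authors' implementation.

    # Illustrative sketch: detect objects -> score pairwise relations ->
    # derive a grasp order from "rests on" predictions (not the paper's code).
    import itertools
    import torch
    import torch.nn as nn


    class EdgeRelationClassifier(nn.Module):
        """Scores the relation of an ordered pair of detected objects from pooled features."""

        def __init__(self, feat_dim=256, num_relations=3):
            # num_relations (e.g. parent / child / no-relation) is an assumed split
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * feat_dim, feat_dim),
                nn.ReLU(),
                nn.Linear(feat_dim, num_relations),
            )

        def forward(self, feat_i, feat_j):
            # concatenate the two object features and score their relation
            return self.mlp(torch.cat([feat_i, feat_j], dim=-1))


    def grasp_order(num_objects, on_top_of):
        """Derive a grasp order from predicted (i, j) pairs meaning 'object i rests on object j'."""
        remaining = set(range(num_objects))
        pairs = set(on_top_of)
        order = []
        while remaining:
            blocked = {j for (_, j) in pairs}      # objects with something still on top
            clear = remaining - blocked
            if not clear:                          # cycle in predictions: pick arbitrarily
                clear = {next(iter(remaining))}
            for o in sorted(clear):
                order.append(o)
                remaining.discard(o)
            pairs = {(i, j) for (i, j) in pairs if i in remaining and j in remaining}
        return order


    if __name__ == "__main__":
        feats = torch.randn(3, 256)                 # pooled ROI features for 3 detected objects
        classifier = EdgeRelationClassifier()
        for i, j in itertools.permutations(range(3), 2):
            scores = classifier(feats[i], feats[j])  # relation logits for the ordered pair (i, j)
        # Suppose the classifier predicts that objects 0 and 1 both rest on object 2:
        print(grasp_order(3, {(0, 2), (1, 2)}))      # -> [0, 1, 2]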

Cited by 16 publications (6 citation statements); references 19 publications.

Citation statements:
“…Simply inferring grasps without considering the underlying object arrangement can result in unsuccessful grasp attempts or even damaged objects. Recent work addresses this problem by attempting to learn the manipulation order for a picking system [27], [41]. However, currently only a few datasets are available that provide the necessary scene layout information [6], [13] (cf.…”
Section: Object Detection and Relationship Reasoning
Confidence: 99%
“…Recently, graph neural networks (GNNs) have been widely shown to be effective for detecting relationships among objects [251,277]. [284] proposed a GNN-based method for VMR detection and achieved better performance. Such methods have been successfully applied to deciding the grasping sequence in dense-clutter scenes [182,267,272].…”
Section: Relational Grasp Synthesis
Confidence: 99%
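
The passage above credits graph neural networks with detecting relationships among objects. As an illustration of the underlying mechanism, the following is a minimal single round of message passing over a fully connected object graph; the feature dimension, the mean aggregation, and the GRU update are assumptions, not any cited paper's architecture.

    # One illustrative message-passing round over a fully connected object graph.
    import torch
    import torch.nn as nn


    class MessagePassingRound(nn.Module):
        def __init__(self, dim=128):
            super().__init__()
            self.message = nn.Linear(2 * dim, dim)   # message from (sender, receiver) features
            self.update = nn.GRUCell(dim, dim)       # node update from aggregated messages

        def forward(self, node_feats):
            n = node_feats.size(0)
            senders = node_feats.unsqueeze(0).expand(n, n, -1)    # [i, j] = features of node j
            receivers = node_feats.unsqueeze(1).expand(n, n, -1)  # [i, j] = features of node i
            msgs = torch.relu(self.message(torch.cat([senders, receivers], dim=-1)))
            agg = msgs.mean(dim=1)                   # each node averages messages from all others
            return self.update(agg, node_feats)      # updated node features


    if __name__ == "__main__":
        feats = torch.randn(4, 128)                  # features for 4 detected objects
        print(MessagePassingRound()(feats).shape)    # torch.Size([4, 128])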
“…A construction scene graph contains not only intuitive visual information but also deep semantic information. Scene graphs are one way to represent the visual relations in an image [23]. The main idea is to express the visual relation between each pair of objects in an image as a subject-predicate-object triple, each of which is treated as a whole in the learning task [24,25].…”
Section: Semantic Conversion Module Based on Frequency Baseline
Confidence: 99%
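
The quoted description reduces a scene graph to a set of subject-predicate-object triples. A toy sketch of that representation is given below; the object names and the simple "what is graspable" query are made up purely for illustration.

    # Toy scene graph as a set of subject-predicate-object triples.
    from dataclasses import dataclass


    @dataclass(frozen=True)
    class Relation:
        subject: str
        predicate: str
        obj: str   # "object" is a Python builtin, so "obj" is used instead


    # A stacking scene: a box resting on a book, which rests on a table.
    scene_graph = {
        Relation("box", "on top of", "book"),
        Relation("book", "on top of", "table"),
    }

    # The triples can be queried to decide what is currently clear to grasp.
    blocked = {r.obj for r in scene_graph}
    graspable = {r.subject for r in scene_graph} - blocked
    print(graspable)   # -> {'box'}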