2021
DOI: 10.1177/17298814211055577
Grasp detection via visual rotation object detection and point cloud spatial feature scoring

Abstract: Accurately detecting appropriate grasp configurations is the central task for a robot grasping an object. Existing grasp detection methods usually overlook the depth image or treat it only as a two-dimensional distance image, which makes it difficult to capture the three-dimensional structural characteristics of the target object. In this article, we transform the depth image into a point cloud and propose a two-stage grasp detection method based on candidate grasp detection from the RGB image and spatial feature r…
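The two stages named in the abstract rest on two common building blocks: back-projecting a depth image into a point cloud with pinhole camera intrinsics, and describing a candidate grasp as an oriented rectangle (x, y, w, h, θ), the standard 5-D grasp representation in the RGB grasp-detection literature. The sketch below illustrates both under assumed intrinsics and parameter names; it is not the paper's implementation.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud
    using the pinhole model. fx, fy, cx, cy are assumed intrinsics,
    not values from the paper. Zero-depth (invalid) pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

def grasp_rect_corners(x, y, w, h, theta):
    """Corners of an oriented grasp rectangle in image coordinates —
    the common (x, y, w, h, theta) grasp representation; the exact
    parameterization used by the paper may differ."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    local = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                      [w / 2,  h / 2], [-w / 2,  h / 2]])
    return local @ rot.T + np.array([x, y])

# Toy example: a 2x2 depth image with one invalid pixel.
depth = np.array([[0.5, 0.5],
                  [0.0, 1.0]])
pc = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
# 3 valid pixels -> 3 points; the candidate grasps found in the RGB
# image could then be scored against this cloud.
corners = grasp_rect_corners(0.0, 0.0, 2.0, 1.0, 0.0)
```

In a two-stage pipeline of this kind, the oriented rectangles come from a rotation object detector on the RGB image, and the point cloud supplies the spatial features used to score and rank those candidates.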

Cited by 5 publications (1 citation statement)
References 21 publications
“…Recent grasping methods are evolving towards 6D pose estimation, such as [18], where a method and dataset use RGB or RGB-D to segment, detect, and estimate an object's 6D pose. The segmentation and point cloud generation of objects in RGB and RGB-D [19], and an improvement in the 6D estimation of moving objects can be seen in [20,21], respectively. This paper proposes a new selective grasping system in 6D using only point clouds from RGB-D sensors.…”
Section: Introduction
confidence: 99%