2017
DOI: 10.48550/arxiv.1710.01330
Preprint
Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching

Cited by 10 publications (12 citation statements)
References 30 publications
“…For example, some designs used two or more grippers in one robotic hand [4]. The grippers can be fixed to tuning turrets [5], or they can have one or more Degrees of Freedom (DoFs) relative to each other [6], [7]. Some other designs used fully actuated [8] or underactuated anthropomorphic hands [9], [10], [11].…”
Section: Introduction (mentioning)
confidence: 99%
“…Some other designs used fully actuated [8] or underactuated anthropomorphic hands [9], [10], [11]. Specifically, Zeng et al [6] developed a gripper with a retractable mechanism to allow switching between a parallel gripper and a suction gripper. Cannella et al [12] and Chen et al [13] developed industrial grippers with twisting ability for high-speed assembly.…”
Section: Introduction (mentioning)
confidence: 99%
“…Multiple RGBD images across space can also be integrated to produce such explicit representations [31]. The latter approach is often used to obtain a 3D scene representation in grasping tasks ([24], [25]). In contrast to these methods, neural-based algorithms learn implicit representations of a scene.…”
Section: B. Multiple View Object and Scene Representation Learning (mentioning)
confidence: 99%
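The statement above refers to fusing registered RGB-D views into one explicit 3D representation. As a minimal sketch (not the cited papers' implementation), each depth map can be back-projected through the camera intrinsics and transformed by its camera-to-world pose, then the per-view point clouds concatenated; all function names here are hypothetical:

```python
import numpy as np

def backproject(depth, K, T_world_cam):
    """Lift a depth map (H, W) into 3-D world points using pinhole
    intrinsics K (3x3) and a 4x4 camera-to-world pose T_world_cam."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]   # standard pinhole back-projection
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)  # homogeneous, (4, H*W)
    pts_world = T_world_cam @ pts_cam                        # move into world frame
    return pts_world[:3].T                                   # (H*W, 3)

def fuse_views(depths, Ks, poses):
    """Concatenate point clouds from several registered RGB-D views
    into one explicit scene representation."""
    return np.concatenate(
        [backproject(d, K, T) for d, K, T in zip(depths, Ks, poses)], axis=0)
```

A grasp planner can then operate directly on the fused cloud, which is the "explicit" counterpart to the learned implicit representations the statement contrasts with.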
“…However, these methods tend to produce average grasps which are invalid for certain symmetric objects [2]. Recently, methods such as [4], [5], [13]- [15] used auto-encoders to generate grasp poses at every pixel. They demonstrated higher grasp accuracy compared to the global methods.…”
Section: Related Work (mentioning)
confidence: 99%
“…Other methods such as [3] focused on learning grasps at patch-level by extracting patches (of different sizes) from the image and predicting a grasp for each patch. Recently, methods such as [4], [5] used auto-encoders to learn grasp parameters at each pixel in the image. They showed that one-to-one mapping (of image data to ground truth grasps) at the pixel-level can effectively be learnt using small CNN structures to achieve fast inference speed.…”
Section: Introduction (mentioning)
confidence: 99%
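The per-pixel grasp networks described above regress grasp parameters densely over the image. A hedged sketch of the decode step only (the encoder-decoder itself is assumed; the map names and the sin/cos angle encoding are illustrative, not taken from [4] or [5]): given dense quality, angle, and width maps, the final grasp is read off at the highest-quality pixel.

```python
import numpy as np

def decode_grasp(quality, angle_sin, angle_cos, width):
    """Pick the best grasp from dense per-pixel maps (all H x W).

    Hypothetical decode for a pixel-wise grasp network: the network is
    assumed to output a grasp-quality score and grasp parameters at
    every pixel; sin/cos encode twice the gripper angle so that the
    representation is continuous under the pi-symmetry of a parallel jaw."""
    idx = np.unravel_index(np.argmax(quality), quality.shape)  # best pixel
    theta = 0.5 * np.arctan2(angle_sin[idx], angle_cos[idx])   # recover rotation
    return {"pixel": idx, "angle": float(theta), "width": float(width[idx])}
```

Because every pixel carries its own prediction, this one-to-one mapping avoids the "average grasp" failure mode on symmetric objects that the global methods suffer from: the argmax selects one concrete mode instead of blending several.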