2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967685
GlassLoc: Plenoptic Grasp Pose Detection in Transparent Clutter

Abstract: Transparent objects are prevalent across many environments of interest for dexterous robotic manipulation. Such transparent material leads to considerable uncertainty for robot perception and manipulation, and remains an open challenge for robotics. This problem is exacerbated when multiple transparent objects cluster into piles of clutter. In household environments, for example, it is common to encounter piles of glassware in kitchens, dining rooms, and reception areas, which are essentially invisible to mode…

Cited by 18 publications (9 citation statements)
References 30 publications
“…Grasp candidates are then generated at regular orientations orthogonal to the curvature axis. Zhou et al. (2019) sample the grasping candidates based on the depth descriptor Depth Likelihood Volume (Zhou Z. et al., 2018).…”
Section: Grasping Candidate Generation
confidence: 99%
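The sampling step quoted above — grasp candidates placed at regular orientations orthogonal to a local curvature axis — can be sketched as follows. This is a minimal illustration, not code from either cited paper; the function name and parameters are hypothetical.

```python
import numpy as np

def sample_grasp_orientations(curvature_axis, n_samples=8):
    """Sample grasp approach directions at regular angular intervals
    in the plane orthogonal to a local curvature axis."""
    a = np.asarray(curvature_axis, dtype=float)
    a /= np.linalg.norm(a)
    # Pick a helper vector not parallel to the axis, then build an
    # orthonormal basis (u, v) spanning the orthogonal plane.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, a)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(a, helper)
    u /= np.linalg.norm(u)
    v = np.cross(a, u)
    # Regularly spaced unit directions in that plane.
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    return [np.cos(t) * u + np.sin(t) * v for t in angles]
```

Each returned direction is a unit vector perpendicular to the curvature axis; a full grasp candidate generator would additionally attach a gripper position and check for collisions.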
“…Gualtieri et al. (2017) and Zhang and Demiris (2020) propose robot systems that assist disabled people in grasping objects and dressing clothes. Llopart et al. (2017), Zhou et al. (2019), Yang et al. (2020), and Zeng et al. (2020) aim to learn grasping capabilities for opening doors, grabbing glasses, picking objects from human hands, and throwing arbitrary objects. More interestingly, Parhar et al. (2018), Guo N. et al. (2020), and Kang et al. (2020) utilize robot grasping to help complete crop harvesting on farms.…”
Section: Applications
confidence: 99%
“…For example, transparent object pose can be estimated with a monocular color camera [43,44], but the translation estimate along the z-axis tends to be inaccurate due to the lack of 3D depth information. Stereo cameras [45,46], light field cameras [47], single-pixel cameras [48], and microscope-camera systems [49] can also be used for object pose estimation, but these works differ substantially from this paper and are not discussed further.…”
Section: Related Work
confidence: 91%
“…In the computer vision community, significant advances have been made in object detection and segmentation [1]-[6], hand detection [7], [8], hand pose estimation [9]-[12] and hand state recognition [13], and human behavior recognition [14]-[20], all of which are potential sources of useful information in human-robot interaction scenarios. For example, grasping [21]-[25], picking-and-placing [26], stable human pose estimation [27], and stable object detection/discovery [28]-[37] are several topics of interest in HRI scenarios. Recent approaches push the boundaries of HRI by continuously adopting advanced methods from individual vision modules to understand human behaviours, especially in cooperative [38] and safe [39] HRI.…”
Section: Introduction
confidence: 99%