2022
DOI: 10.48550/arxiv.2210.06575
Preprint

GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF

Cited by 3 publications (4 citation statements) · References 0 publications

“…NeRF predicts the RGB color and density of a point in a scene so that an image from an arbitrary viewpoint can be rendered. This property enables pose estimation [1,30,31,44] based on the photometric loss between the observed image and the rendered image, or manipulation of tricky objects [5,12,14,25,29]. A pretrained NeRF can also work as a virtual simulator in which a robot can plan its trajectory [1], or it can be used to train an action policy for the real world [6].…”
Section: Related Work
confidence: 99%
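
For context, the rendering property described in this statement is the standard NeRF volume-rendering approximation; the sketch below uses the notation of the original NeRF paper as an assumption, and is not taken from the quoted works. Along a camera ray $\mathbf{r}$, sampled points have predicted colors $\mathbf{c}_i$ and densities $\sigma_i$, and the rendered pixel color is

$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i, \qquad T_i = \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr),$$

where $\delta_i$ is the spacing between adjacent samples along the ray. Photometric-loss pose estimation of the kind cited above then optimizes a camera pose $\xi$ by minimizing $\sum_{\mathbf{r}} \lVert \hat{C}(\mathbf{r};\xi) - C_{\mathrm{obs}}(\mathbf{r}) \rVert_2^2$ against the observed image.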
“…However, these approaches often require retraining the NeRF model before each grasp to update the environment states. GraspNeRF [20] addresses this constraint by proposing a generalisable NeRF that is free from per-scene optimisation. Nevertheless, GraspNeRF is not object-centric and thus cannot interpret the scene at the object level.…”
Section: Related Work
confidence: 99%
“…Representatives are various glass objects, e.g., glass walls or glass flasks, and transparent or translucent plastic parts, e.g., clear orthodontic aligners or car headlights. Typical areas of application are as follows: (a) human-robot interaction, e.g., confident detection of visually uncooperative objects [1]; (b) autonomous robot navigation, e.g., collision prevention with glass walls; (c) laboratory automation, e.g., grasping visually uncooperative objects [2-5]; (d) the medical sector, e.g., 3D reconstruction of clear orthodontic aligners; (e) autonomous waste sorting and recycling; and (f) augmented reality [6]. In these use cases, there are two main tasks:…”
Section: Introduction
confidence: 99%
“…• Accurately estimating the depth of optically uncooperative objects. This includes accurate and reliable depth estimates, also known as deep depth completion [2,12-14], 3D reconstruction methods [3,15,16], and stereo vision [17-19].…”
confidence: 99%