2021 IEEE International Conference on Robotics and Automation (ICRA) 2021
DOI: 10.1109/icra48506.2021.9561516
Object Rearrangement Using Learned Implicit Collision Functions

Cited by 45 publications (50 citation statements); references 36 publications.
“…However, setting up a real-world TAMP system often requires substantial task-specific knowledge and accurate 3D models of the environment, significantly limiting the environments to which the system can generalize. To address this challenge, recent work has adopted deep learning-based approaches for robotic manipulation, for instance, on grasp planning [44,47,48,62,65], motion planning [7,57], and reasoning about spatial relations [20,36,49].…”
Section: Related Work
confidence: 99%
“…The planner classifies each relative transform as feasible if the object at the predicted transform does not collide with any other object in the scene. We used the pre-trained SceneCollisionNet [7] to check collisions for the object at the predicted transform. The feasible objects are then ranked based on the score S = |r| + λ|t|, where r is the relative rotation in radians, t is the relative translation in cm, and λ = 0.2 in our experiments.…”
Section: Planning and Execution
confidence: 99%
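The feasibility-then-ranking scheme quoted above can be sketched in a few lines. This is a minimal illustration, not the citing paper's implementation: `collision_free` is a hypothetical predicate standing in for a learned collision checker such as SceneCollisionNet, and the candidate transforms are toy values.

```python
import math

LAMBDA = 0.2  # translation weight lambda, as reported in the excerpt


def score(rotation_rad, translation_cm):
    """Ranking score S = |r| + lambda * |t|; lower means less motion."""
    return abs(rotation_rad) + LAMBDA * abs(translation_cm)


def plan(candidates, collision_free):
    """Keep feasible relative transforms, sorted best-first by score.

    candidates: list of (rotation_rad, translation_cm) tuples.
    collision_free: predicate standing in for a learned collision
    checker (hypothetical interface; the real model consumes point clouds).
    """
    feasible = [c for c in candidates if collision_free(c)]
    return sorted(feasible, key=lambda c: score(*c))


# Toy usage: the dummy checker rejects the large-translation candidate.
cands = [(math.pi / 2, 10.0), (0.1, 5.0), (0.0, 100.0)]
best = plan(cands, collision_free=lambda c: c[1] < 50.0)[0]
# best is (0.1, 5.0): small rotation and translation, hence lowest score
```

The score trades rotation against translation at a fixed rate; with λ = 0.2, one radian of rotation costs the same as 5 cm of translation.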
“…If the number of predicted grasps for an object is too low, we reduce the confidence threshold to 0.19. In the end, we execute the most confident grasp that is kinematically reachable and for which the robot does not collide with the scene [38].…”
Section: A. Inference
confidence: 99%
“…In our experiments, we use the Intel Realsense L515 LiDAR camera mounted on a tripod for both RGB and depth data. Robot motions are generated using [38].…”
Section: Real Robot Grasp Experiments
confidence: 99%