2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9197548

GOMP: Grasp-Optimized Motion Planning for Bin Picking

Abstract: High-speed motions in pick-and-place operations are critical to making robots cost-effective in many automation scenarios, from warehouses and manufacturing to hospitals and homes. However, motions can be too fast, such as when the object being transported has an open top, is fragile, or both. One way to avoid spills or damage is to move the arm slowly. We propose Grasp-Optimized Motion Planning for Fast Inertial Transport (GOMP-FIT), a time-optimizing motion planner based on our prior work that includes cons…

Cited by 38 publications (14 citation statements). References 64 publications.
“…2, we compare the shifted geometric mean of solving 10 problems of 20 different dimensions, for a total of 200 runs per class per solver. The problem dimensions for Control, Huber, SVM, Lasso are (10, 11, 12, 13, 14, 16, 17, 20, 23, 26, 31, 37, 45, 55, 68, 84, 105, 132, 166, 209); for Random and Eq they are (10, 11, 12, 13, 15, 18, 23, 29, 39, 53, 73, 103, 146, 211, 304, …). [Figure 3: Solve time with increasing dimension on the Random QP problem set.] We train and benchmark two vector RL adaptation policies: (dashed) on problems ranging from dimension 10 to 50, and (solid) on problems ranging from 10 to 2000.…”
Section: Multi-task/General RLQP Policy (mentioning)
confidence: 99%
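The shifted geometric mean used in the benchmark above is a standard aggregate for solver timings. Below is a minimal Python sketch of that aggregate; the shift value (commonly 1 or 10) and the example solve times are assumptions for illustration, not figures from the cited paper.

```python
import numpy as np

def shifted_geometric_mean(times, shift=1.0):
    """Shifted geometric mean of solve times.

    The shift damps the influence of near-zero timings so that one very fast
    run does not dominate the aggregate across a problem class.
    """
    times = np.asarray(times, dtype=float)
    return float(np.exp(np.mean(np.log(times + shift))) - shift)

# Hypothetical solve times (seconds) of one solver over one problem class.
print(shifted_geometric_mean([0.02, 0.05, 0.11, 0.31, 0.90], shift=1.0))
```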
“…Many applications in control [26] and optimization [27] require QPs from the same class to be repeatedly solved. To test whether training a policy specific to a QP class can outperform a policy trained on the benchmark suite, we train policies specific to the problems generated by the trust-region-based [8] sequential quadratic programming (SQP) solver from a grasp-optimized motion planner (GOMP) [26, 25] for robots. With these problems, RLQP trained on the benchmarks converges more slowly than the handcrafted policy included in OSQP.…”
Section: Training a Class-Specific Policy (mentioning)
confidence: 99%
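As a rough illustration of the setting this citation describes (the same class of QP solved repeatedly inside an SQP loop), the sketch below re-solves an OSQP problem whose sparsity pattern stays fixed while its vectors change. The problem data, loop length, and perturbations are placeholders; the actual GOMP trust-region subproblems are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
import osqp

n = 10
P = sp.eye(n, format="csc")          # placeholder quadratic cost
A = sp.eye(n, format="csc")          # placeholder constraints (fixed sparsity pattern)
q = np.zeros(n)
l, u = -np.ones(n), np.ones(n)

solver = osqp.OSQP()
solver.setup(P, q, A, l, u, verbose=False)

for it in range(20):                 # stand-in for SQP outer iterations
    res = solver.solve()
    x = res.x
    # In a real SQP step, q, l, u would come from relinearizing the subproblem
    # around x; here we only perturb them to mimic a fixed problem class.
    solver.update(q=q + 0.01 * it, l=l - 0.01 * it, u=u + 0.01 * it)
```

Because the structure of the QP never changes across iterations, a class-specific parameter-adaptation policy (as trained in the cited work) has many structurally identical solves over which to amortize what it learns.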
“…In this section, we describe a trajectory time-optimization to improve the speed of task performance. In prior formulations [60, 61] of pick-and-place trajectory optimization, a time-minimized trajectory is found by discretizing the trajectory into a sequence of waypoints and formulating a sequential quadratic program that minimizes the sum-of-squared acceleration or sum-of-squared distance between the waypoints. We observe that this prior formulation, while versatile enough for the peg transfer tasks, can be simplified to minimizing a sequence of splines defined by up to four waypoints, and relying on the kinematic design of the robot to avoid collisions.…”
Section: Non-linear Trajectory Optimization (mentioning)
confidence: 99%
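The discretized objective described above can be made concrete with a small sketch: the quadratic cost below penalizes squared accelerations obtained by second-order finite differences of the waypoints. The function name, the uniform timestep, and the waypoint flattening order are assumptions made for illustration.

```python
import scipy.sparse as sp

def sum_squared_acceleration_cost(H, n_joints, dt):
    """Return sparse P so that 0.5 * x^T P x is the sum-of-squared-acceleration
    cost for a flattened waypoint vector x = [q_0; q_1; ...; q_{H-1}]."""
    # Second-difference stencil: (q[t+1] - 2 q[t] + q[t-1]) / dt^2
    D2 = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(H - 2, H)) / dt**2
    D2 = sp.kron(D2, sp.eye(n_joints))   # apply the same stencil to every joint
    return 2.0 * (D2.T @ D2)             # 0.5 * x^T P x == ||D2 x||^2

P = sum_squared_acceleration_cost(H=30, n_joints=6, dt=0.05)
```

Time-minimization then amounts to shrinking the horizon (or the timestep) while the waypoint constraints remain feasible.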
“…The objective of the QP minimizes jerk. To make the trajectory as fast as possible, in a manner similar to GOMP [10], we repeatedly reduce H by 1 until the QP is infeasible, and use the shortest feasible trajectory. The QP takes the following form: …”
Section: Min Jerk Trajectory Generation Function (mentioning)
confidence: 99%
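The horizon search this citation describes (reduce H until the QP becomes infeasible, keep the last feasible trajectory) can be sketched as a simple loop. Here `build_and_solve_qp` is a hypothetical helper standing in for assembling and solving the jerk-minimizing QP at a given horizon; it is not from the cited work.

```python
def shortest_feasible_trajectory(H_init, build_and_solve_qp):
    """Shrink the horizon one step at a time and keep the last feasible solution."""
    best = None
    H = H_init
    while H > 1:
        feasible, traj = build_and_solve_qp(H)   # hypothetical QP assembly + solve
        if not feasible:
            break            # H is now too short; the previously stored solution wins
        best = traj          # remember the latest (shortest so far) feasible trajectory
        H -= 1
    return best
```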