2010
DOI: 10.1007/s12204-010-1055-6
A numerical approach to trajectory planning for yoyo movement

Abstract: Based on the nonlinear trajectory generation (NTG) software package, a general numerical approach to trajectory planning for yoyo motion is presented. For real-time control of such a periodic dynamic system, a critical problem is how to solve for the optimal trajectory fast enough to meet the real-time demand; traditional numerical solution methods are very time-consuming. In this paper, the optimization problem is solved by mapping it to a lower-dimensional space. And c…

Cited by 8 publications (11 citation statements). References 9 publications.
“…Therefore, many methods can be applied to solve the problem, e.g., primal-dual interior-point methods, second-order cone programming, and Newton's method for the logarithmic-barrier problem (Yuan 1993). An outline of solving the l1-norm minimization problem by linear programming is given in Appendix B.…”
Section: Comparison With the Linear Programming Methods
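The linear-programming route mentioned in the quote works by introducing slack variables t with −t ≤ Ax − b ≤ t, so that minimizing the l1 residual becomes minimizing the sum of the t_i. A minimal sketch of that reformulation, assuming a generic data-fitting problem min_x ‖Ax − b‖₁ (the helper `l1_to_lp` and the matrices A, b are illustrative, not from the cited works):

```python
import numpy as np

def l1_to_lp(A, b):
    """Reformulate min_x ||A x - b||_1 as the LP
        min_{x,t} 1^T t   s.t.  A x - b <= t,  -(A x - b) <= t,
    returned as (c, A_ub, b_ub) in standard inequality form over z = [x; t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # objective: sum of slacks t
    A_ub = np.block([[ A, -np.eye(m)],              #  A x - t <=  b
                     [-A, -np.eye(m)]])             # -A x - t <= -b
    b_ub = np.concatenate([b, -b])
    return c, A_ub, b_ub

# small overdetermined example
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.5])
c, A_ub, b_ub = l1_to_lp(A, b)

# any x together with t = |A x - b| is LP-feasible, and the LP objective
# c^T z then equals the l1 residual of x
x = np.array([1.0, 2.0])
z = np.concatenate([x, np.abs(A @ x - b)])
assert np.all(A_ub @ z <= b_ub + 1e-12)
assert np.isclose(c @ z, np.abs(A @ x - b).sum())
```

Any off-the-shelf LP solver (interior-point or simplex) can then be handed `(c, A_ub, b_ub)`.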
“…, n and L_i for all i form an n × (n + 1) matrix. For solving the above minimization problem, one may try the Gauss-Newton method (Yuan 1993), since the gradient and Hessian can be easily computed. For simplicity of notation, we set ψ(r) := (1/2) ‖Kr − d^δ‖²_{l2}.…”
Section: Smooth and Nonsmooth Hybrid Regularization
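For ψ(r) = (1/2)‖Kr − d‖² the Gauss-Newton ingredients the quote alludes to are explicit: the gradient is Kᵀ(Kr − d) and the Gauss-Newton Hessian is KᵀK. A small sketch under the assumption that K is a linear operator (the random K and d are illustrative only; for linear K a single Gauss-Newton step already lands on the least-squares minimizer):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 3))   # illustrative linear forward operator
d = rng.standard_normal(8)        # illustrative data

def psi(r):
    """psi(r) = 0.5 * ||K r - d||^2."""
    return 0.5 * np.sum((K @ r - d) ** 2)

# Gauss-Newton iteration: gradient K^T (K r - d), Hessian approx K^T K
r = np.zeros(3)
for _ in range(2):
    g = K.T @ (K @ r - d)
    H = K.T @ K
    r = r - np.linalg.solve(H, g)

# stationarity: the gradient vanishes at the least-squares minimizer
assert np.allclose(K.T @ (K @ r - d), 0.0, atol=1e-10)
```

For a nonlinear forward map, K would be replaced by the Jacobian at the current iterate and the same step repeated until convergence.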
“…For example, for the classical steepest descent (SD) method [21], the stepsize α_k is chosen such that J[x] is minimized along the line x_k − α_k grad_k[J], that is,…”
Section: Projected Gradient Methods
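For a quadratic objective J(x) = (1/2)xᵀAx − bᵀx with SPD A, that exact line search has a closed form: with g = Ax_k − b, the minimizer of J along x_k − αg is α_k = gᵀg / gᵀAg. A minimal sketch of exact-line-search steepest descent under that quadratic assumption (the 2×2 A and b are illustrative):

```python
import numpy as np

# SPD quadratic J(x) = 0.5 x^T A x - b^T x, with grad J = A x - b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)
for _ in range(50):
    g = A @ x - b
    if np.linalg.norm(g) < 1e-12:          # converged
        break
    alpha = (g @ g) / (g @ A @ g)          # exact minimizer of J along x - alpha*g
    x = x - alpha * g

# at the minimizer, grad J = 0, i.e. A x = b
assert np.allclose(A @ x, b, atol=1e-8)
```

Successive search directions are orthogonal here (g_{k+1}ᵀg_k = 0), which is exactly what produces the zigzag path criticized in the next statement.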
“…However, the steepest descent method converges slowly and zigzags after several iterations [42]. The poor behavior is due to the optimal choice of the step size, not to the choice of the steepest descent direction g_k.…”
Section: Iterative Regularization: Non-monotone Gradient Iteration
“…This figure (log scale of the objective function versus iterative steps) vividly shows us the non-monotonicity and speed of the non-monotone gradient descent method. It is well known that the gradient descent method and the conjugate gradient method possess a linear convergence rate [42]. And as the iterations proceed, a zigzagging phenomenon occurs for these two methods [33,34].…”
Section: Layered Velocity Model
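A representative non-monotone gradient scheme is the Barzilai-Borwein stepsize α_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = g_k − g_{k−1} (the cited work may use a different non-monotone rule; this is a stand-in sketch). On an ill-conditioned quadratic the objective can rise on individual steps yet the iteration converges far faster than classical steepest descent:

```python
import numpy as np

# ill-conditioned SPD quadratic f(x) = 0.5 x^T A x, minimizer x* = 0
A = np.diag([1.0, 10.0, 100.0])
x = np.array([1.0, 1.0, 1.0])
g = A @ x
alpha = 0.005                          # conservative initial step
f_hist = [0.5 * x @ A @ x]

for _ in range(150):
    x_new = x - alpha * g
    g_new = A @ x_new
    s, y = x_new - x, g_new - g
    if s @ y > 0:                      # Barzilai-Borwein (BB1) stepsize;
        alpha = (s @ s) / (s @ y)      # reuse previous alpha otherwise
    x, g = x_new, g_new
    f_hist.append(0.5 * x @ A @ x)

# the objective need not decrease at every step, yet it converges
assert f_hist[-1] < 1e-6
```

Plotting `f_hist` on a log scale reproduces qualitatively the behavior the quoted figure describes: occasional upward spikes superimposed on rapid overall decay.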