In this article, we discuss a special class of time-optimal control problems for dynamic systems, where the final state of the system lies on a hypersurface. In the time domain, this end-point constraint may be given by a scalar equation, which we call the transversality condition. It is well known that such problems can be transformed into a two-point boundary value problem, which is usually hard to solve and requires an initial guess close to the optimal solution. Hence, we propose a new gradient-based iterative solution strategy instead, in which the gradient of the cost functional, i.e., of the final time, is computed with the adjoint method. Two formulations of the adjoint method are presented in order to solve such control problems. First, we consider a hybrid approach, where the state equations and the adjoint equations are formulated in the time domain, but the controls and the gradient formula are transformed to a spatial variable with fixed boundaries. Second, we introduce an alternative approach, in which we eliminate the time coordinate completely and work with a formulation in the space domain. Both approaches are robust with respect to poor initial controls and yield a shorter final time and, hence, an improved control after every iteration. The presented method is tested on two classical examples from satellite and vehicle dynamics, but it can also be extended to more complex systems used in industrial applications.
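To make the problem class concrete, a generic statement of such a time-optimal problem with a transversality condition can be written as follows; the symbols x, u, f, g and t_f are generic placeholders chosen for illustration, not notation taken from the article:

```latex
\min_{\mathbf{u}(\cdot),\,t_f} \; J = t_f
\quad \text{subject to} \quad
\dot{\mathbf{x}}(t) = \mathbf{f}\bigl(\mathbf{x}(t), \mathbf{u}(t)\bigr),
\qquad \mathbf{x}(0) = \mathbf{x}_0,
\qquad g\bigl(\mathbf{x}(t_f)\bigr) = 0 .
```

Here the scalar equation g(x(t_f)) = 0 is the transversality condition describing the hypersurface on which the final state must lie, and the free final time t_f itself is the cost to be minimized.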
Within the framework of this article, we pursue a novel approach to the determination of time-optimal controls for dynamic systems subject to end conditions. Such problems arise in robotics, e.g., if the control of a robot has to be designed such that the time for a rest-to-rest maneuver is minimized. So far, such problems have generally been treated as two-point boundary value problems, which are hard to solve and require an initial guess close to the optimal solution. The aim of this work is the development of an iterative, gradient-based solution strategy that can be applied even to complex multibody systems. The so-called adjoint method is a promising way to compute the direction of steepest descent, i.e., the variation of a control signal causing the largest local decrease of the cost functional. The proposed approach is more robust than solving the underlying boundary value problem, as the cost functional is minimized iteratively while the final conditions are approached. Moreover, so-called influence differential equations are formulated to relate changes of the controls to changes of the final conditions. In order to meet the end conditions, we introduce a descent direction that, on the one hand, approaches the optimum of the constrained cost functional and, on the other hand, reduces the error in the prescribed final conditions; a numerical sketch of such a combined step is given below.
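The sketch below illustrates one common way to build such a combined step: the adjoint gradient of the final time is projected onto the null space of the end-condition sensitivities, and a restoration term drives the end-condition error toward zero. The projection/restoration formula, the array names (grad_J, dc_du, c) and the step sizes are assumptions made for the example; in the article's setting, the adjoint method would supply grad_J and the influence differential equations would supply dc_du.

```python
import numpy as np

# Illustrative data: gradient of the final time w.r.t. N discretized control
# values (from an adjoint computation) and the sensitivity of m end conditions
# w.r.t. the same controls (from influence/sensitivity equations).
N, m = 50, 2
rng = np.random.default_rng(0)
grad_J = rng.standard_normal(N)        # dJ/du, with J = final time
dc_du = rng.standard_normal((m, N))    # dc/du, with c = end-condition error
c = rng.standard_normal(m)             # current end-condition error

alpha, beta = 1e-2, 0.5                # step sizes (assumed)

# Projector onto the null space of the end-condition sensitivities:
# moving along P @ grad_J changes the cost but, to first order, not c.
G = dc_du @ dc_du.T
P = np.eye(N) - dc_du.T @ np.linalg.solve(G, dc_du)

# Combined update: descend on the cost inside the constraint tangent space
# and simultaneously reduce the end-condition error (restoration term).
delta_u = -alpha * (P @ grad_J) - dc_du.T @ np.linalg.solve(G, beta * c)

# To first order, the step removes the fraction beta of the end-condition error:
print(dc_du @ delta_u + beta * c)      # approximately zero
```

The sketch only mirrors the idea of a descent direction that lowers the cost while restoring the final conditions; the article's own descent direction is derived from its adjoint and influence equations rather than from this generic linear-algebra construction.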
This article illustrates a novel approach to the determination of time-optimal controls for dynamic systems subject to end conditions. Such problems arise in robotics, e.g., if the control of a robot has to be designed such that the time for a rest-to-rest maneuver is minimized. So far, such problems have been considered as two-point boundary value problems, which are hard to solve and require an initial guess close to the optimal solution. The aim of this contribution is the development of an iterative, gradient-based solution strategy for solving such problems. As an example, a Moon landing, as in the Apollo program, is considered. In detail, we discuss the ascent, descent and abort maneuvers of the Apollo Lunar Excursion Module (LEM) to and from the Moon’s surface in minimum time. The goal is to find the control of the thrust nozzle of the LEM that minimizes the final time.
In this paper, we discuss time-optimal control problems for dynamic systems. Such problems typically arise in robotics when a manipulation task should be carried out in minimal operation time. In particular, for time-optimal control problems with a large number of control parameters, the adjoint method is probably the most computationally efficient way to calculate the gradients of the optimization problem. In this paper, we present an adjoint gradient approach for solving time-optimal control problems with a special focus on a discrete control parameterization. On the one hand, we provide an efficient approach for computing the direction of steepest descent of a cost functional in which the costs and the error in the final constraints are reduced within one combined iteration. On the other hand, we use this approach to provide an exact gradient for other optimization strategies and to evaluate the necessary optimality conditions in terms of the Hamiltonian function. Two examples of time-optimal trajectory planning for a robot demonstrate easy access to the adjoint gradients and their interpretation in the context of the optimality conditions of optimal control solutions, e.g., as computed by a direct optimization method.
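As an illustration of how adjoint gradients can be obtained for a discrete control parameterization, the sketch below differentiates a terminal cost of a forward-Euler-discretized double integrator (standing in for a single robot joint) by a backward adjoint sweep. The dynamics, the cost and all variable names are assumptions made for the example; the article treats the time-optimal problem, whereas here a fixed-horizon terminal cost is used to keep the sketch short.

```python
import numpy as np

# Double integrator as a stand-in for one robot joint: x = [q, qd], qdd = u.
h, N = 0.01, 200                       # step size and number of control intervals
A = np.array([[0.0, 1.0], [0.0, 0.0]]) # df/dx for the linear dynamics
B = np.array([[0.0], [1.0]])           # df/du
x_target = np.array([1.0, 0.0])        # rest-to-rest target state (assumed)

def f(x, u):
    return A @ x + B.flatten() * u

# Forward sweep with a piecewise-constant control u[0..N-1].
u = np.zeros(N)
x = np.zeros((N + 1, 2))
for k in range(N):
    x[k + 1] = x[k] + h * f(x[k], u[k])

# Terminal cost phi = 0.5 * ||x_N - x_target||^2 as a simple surrogate cost.
p = np.zeros((N + 1, 2))
p[N] = x[N] - x_target                 # adjoint terminal condition p_N = dphi/dx_N

# Backward adjoint sweep and gradient w.r.t. each discrete control parameter.
grad = np.zeros(N)
for k in range(N - 1, -1, -1):
    grad[k] = h * (B.T @ p[k + 1])[0]  # dJ/du_k = h * B^T p_{k+1}
    p[k] = p[k + 1] + h * (A.T @ p[k + 1])

# Hamiltonian-type diagnostic: stationarity of H_k = p_{k+1}^T f(x_k, u_k)
# with respect to u_k is equivalent to grad[k] = 0 at an optimal control.
print(grad[:3])
```

One backward sweep yields the gradient with respect to all N control parameters at the cost of roughly one additional state-sized integration, which is why the adjoint approach scales well for many control parameters; the same gradient can then be handed to a steepest-descent update or to another optimizer.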