This paper analyzes approximate augmented Lagrangian dynamical systems for constrained optimization. We formulate differential systems based on the first and second derivatives of the approximate augmented Lagrangian. The solution of the original optimization problem can be obtained at the equilibrium point of the differential equation system, which leads the dynamic trajectory into the feasible region. Under suitable conditions, the asymptotic stability of the differential systems and the local convergence properties of their Euler discretization schemes are analyzed, including the locally quadratic convergence rate of the discrete sequence for the second-derivative-based differential system. The transient behavior of the differential equation systems is simulated, and the validity of the approach is verified through numerical experiments.
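As an illustration of the kind of scheme the abstract describes (our own minimal sketch, not the paper's exact system), an explicit Euler discretization of a first-derivative-based augmented Lagrangian flow can be written for the toy problem min x1^2 + x2^2 subject to x1 + x2 = 1. The flow used here is x' = -grad_x L_c(x, lam), lam' = h(x), with L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2; the penalty weight c and step size dt are illustrative choices.

```python
import numpy as np

c, dt = 10.0, 0.01                   # penalty weight and Euler step size (illustrative)
x = np.array([2.0, -1.0])            # infeasible starting point
lam = 0.0                            # multiplier estimate

for _ in range(10000):
    h = x[0] + x[1] - 1.0            # equality constraint value h(x)
    # grad_x L_c = grad f + (lam + c*h) * grad h; here grad f = 2x, grad h = (1, 1)
    grad_x = 2.0 * x + (lam + c * h)
    x = x - dt * grad_x              # Euler step on the primal flow
    lam = lam + dt * h               # Euler step on the multiplier flow

print(x, lam)                        # equilibrium near x = (0.5, 0.5), lam = -1
```

At the equilibrium, grad_x L_c = 0 and h(x) = 0 hold simultaneously, which recovers the KKT point of the toy problem; this mirrors the abstract's claim that the equilibrium of the flow solves the original constrained problem.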
This note provides a counterexample demonstrating some flaws in a recent paper by Chen et al. It pinpoints the logical error in the proof of Theorem 5.2 and discusses possible remedial work.
We develop a cell-average-based neural network (CANN) method to compute nonlinear differential equations. Using feedforward networks, we train the evolution of the average solution from t0 to t0 + Δt with given initial values. To find the optimal parameters for the network, we use a backpropagation (BP) algorithm in combination with supervised training. With the trained network, we can compute the approximate solution at time tn+1 from the one at time tn. Numerical results show that the CANN method permits a very large time step size for solution evolution.
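The training-and-evolution loop the abstract describes can be sketched in miniature (our illustration under strong simplifying assumptions, not the authors' CANN architecture): a tiny feedforward net with hand-written backpropagation learns the one-step map u(tn) -> u(tn + Δt) for the scalar decay equation u' = -u, whose exact evolution is u(t + Δt) = u(t)·exp(-Δt), and is then applied recursively to march forward in time.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.5                                  # deliberately large step, echoing the abstract's claim

# Supervised training pairs: initial values u0 and exact solutions one step later.
u0 = rng.uniform(0.1, 2.0, size=(200, 1))
u1 = u0 * np.exp(-dt)

# One hidden layer with tanh activation (illustrative sizes).
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(u0 @ W1 + b1)             # forward pass
    pred = h @ W2 + b2
    err = pred - u1                       # gradient of the MSE loss
    gW2 = h.T @ err / len(u0); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)        # backpropagation through tanh
    gW1 = u0.T @ dh / len(u0); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evolve a new initial value for several steps with the trained one-step map.
u = np.array([[1.0]])
for _ in range(4):
    u = np.tanh(u @ W1 + b1) @ W2 + b2
print(float(u))                           # should approximate exp(-4 * dt)
```

The recursive application of the learned one-step map is what allows the method to evolve the solution to any later tn+1 from tn, and the step size Δt enters only through the training data, not through a stability restriction as in classical explicit schemes.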