2012
DOI: 10.1007/s10898-012-9910-7

Convergence of a class of penalty methods for constrained scalar set-valued optimization

Cited by 5 publications (6 citation statements); citing publications span 2014–2024. References 20 publications.
“…Proof 1 Using (7), $\Delta G_{k,j}^{T}\,\Delta u_{k} = \Delta u_{k}^{T}\,\Theta_{k+1,j}\,\Delta u_{k}$, and if $\Theta_{k+1,j} \succ 0$ then $\Delta G_{k,j}^{T}\,\Delta u_{k} > 0$ and necessity is established. Using (11), there exists an arbitrary vector $\zeta_{j} \neq 0$ such that…”
Section: Control Law Design (mentioning)
confidence: 99%
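The necessity step quoted above hinges on the fact that a positive definite $\Theta_{k+1,j}$ makes the quadratic form $\Delta u_{k}^{T}\Theta_{k+1,j}\Delta u_{k}$ strictly positive for every nonzero $\Delta u_{k}$. The snippet below is a minimal numerical illustration of that fact only; the matrix, dimensions, and variable names are placeholders and are not taken from the citing paper.

```python
# Minimal numerical sketch (not from the cited paper): if Theta is positive
# definite, the quadratic form du^T Theta du is strictly positive for any
# nonzero du, which is the step used to establish necessity in the quote above.
import numpy as np

rng = np.random.default_rng(0)

# Build an illustrative positive definite matrix Theta = A^T A + I.
A = rng.standard_normal((4, 4))
Theta = A.T @ A + np.eye(4)

# Check the quadratic form for a few random nonzero directions du.
for _ in range(5):
    du = rng.standard_normal(4)
    value = du @ Theta @ du          # corresponds to du^T Theta du
    assert value > 0, "positive definiteness violated"
    print(f"du^T Theta du = {value:.4f} > 0")
```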
“…Moreover, solving the nonlinear equations defining the entries in the associated Hessian matrix is not required, which greatly reduces the computation and improves the efficiency. Previous work in the non-ILC literature has developed a penalty function method for a class of constrained optimization problems together with convergence analysis, see, e.g., [11].…”
Section: Introduction (mentioning)
confidence: 99%
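The penalty function approach mentioned in this excerpt can be illustrated by a generic sequential penalty iteration: the constrained problem is replaced by a sequence of unconstrained subproblems whose penalty parameter grows. The objective, constraint, penalty schedule, and solver below are illustrative placeholders, not the formulation analysed in [11] or in the cited paper.

```python
# Hedged sketch of a generic sequential penalty method: minimize f(x) subject
# to g(x) <= 0 by solving unconstrained subproblems f(x) + rho * max(g(x), 0)**2
# for an increasing penalty parameter rho. The concrete f, g, and solver are
# illustrative assumptions, not taken from [11].
import numpy as np
from scipy.optimize import minimize

def f(x):                      # illustrative objective
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                      # illustrative inequality constraint g(x) <= 0
    return x[0] + x[1] - 2.0

def penalized(x, rho):
    return f(x) + rho * max(g(x), 0.0) ** 2

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize(lambda x, r=rho: penalized(x, r), x)
    x = res.x                  # warm-start the next subproblem
    print(f"rho={rho:7.1f}  x={np.round(x, 4)}  violation={max(g(x), 0.0):.2e}")
```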
“…If that is not the case, then there exists a subsequence of {( , , )}, still denoted {( , , )} without loss of generality, such that for each , ( , ) is a local minimizer of ( ) with finite ( , ) and ≠ 0; as → ∞, we have → +∞ and ( , ) → ( * , * ). By (9) we know that…”
Section: A Modified Simple Exact Penalty Function For Equality Constr… (mentioning)
confidence: 99%
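For orientation only: the classical simple (l1) exact penalty function for an equality-constrained problem $\min f(x)$ s.t. $h(x) = 0$ takes the form below, and it is exact once the penalty parameter exceeds a problem-dependent threshold. The modified function discussed in the citing excerpt differs in its details, so this is a generic reference point rather than that paper's definition.

```latex
% Classical simple (l1) exact penalty for  min f(x)  s.t.  h(x) = 0.
% For rho above a problem-dependent threshold, unconstrained minimizers of
% P_rho coincide with solutions of the constrained problem.
P_{\rho}(x) \;=\; f(x) + \rho \,\lVert h(x) \rVert_{1}
```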
“…There are some nonsmooth penalty functions for nonsmooth optimization problems, such as the exact penalty function using the distance function for the nonsmooth variational inequality problem in Hilbert spaces [8]. In [9], the convergence of lower-order exact penalization for a constrained scalar set-valued optimization problem is given under sufficient conditions which are easy to verify.…”
Section: Introduction (mentioning)
confidence: 99%
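A lower-order penalization, as referenced in this excerpt, typically raises the constraint-violation measure to a power $q \le 1$. The display below is a generic scalar, single-valued illustration of that idea; the set-valued formulation and the exactness conditions of [9] are not reproduced here.

```latex
% Generic lower-order penalty for  min f(x)  s.t.  g(x) <= 0,  with order 0 < q <= 1;
% q = 1 recovers the classical l1 exact penalty. This is a scalar, single-valued
% illustration, not the set-valued formulation analysed in [9].
P_{\rho,q}(x) \;=\; f(x) + \rho \,\bigl[\max\{0,\, g(x)\}\bigr]^{q}, \qquad 0 < q \le 1
```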
“…However, this recent line of research does not just offer a promising avenue for establishing a thorough mathematical framework for understanding the numerically observed successes of CBO methods [9, 11, 15-17], but beyond that makes it possible to explain the effective use of conceptually similar and widespread methods such as PSO, as well as of optimisation algorithms that look completely different at first glance, such as stochastic gradient descent (SGD). While the first connection is to be expected and has by now been made fairly rigorous [18-20], since CBO indisputably takes PSO as its inspiration, the second observation is somewhat surprising, as it builds a bridge between derivative-free metaheuristics and gradient-based learning algorithms. Despite CBO relying solely on evaluations of the objective function, recent work [21] reveals an intrinsic SGD-like behaviour of CBO itself by interpreting it as a certain stochastic relaxation of gradient descent, which provably overcomes energy barriers of non-convex functions.…”
Section: Introduction (mentioning)
confidence: 99%
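To make the excerpt's description of CBO concrete ("relying solely on evaluations of the objective function"), here is a minimal sketch of a standard isotropic CBO iteration: particles drift toward a softmin-weighted consensus point and add scaled exploration noise. All parameters and the test objective are illustrative assumptions, not taken from the works cited above.

```python
# Hedged sketch of a standard isotropic consensus-based optimisation (CBO)
# iteration; parameter values and the test objective are illustrative only.
import numpy as np

def cbo_minimize(f, dim=2, n_particles=50, steps=200,
                 alpha=30.0, lam=1.0, sigma=0.8, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))   # initial swarm
    for _ in range(steps):
        values = np.array([f(x) for x in X])
        # Weighted consensus point: a softmin over particle objective values,
        # so CBO only needs evaluations of f, no gradients.
        weights = np.exp(-alpha * (values - values.min()))
        consensus = weights @ X / weights.sum()
        # Drift toward the consensus point plus scaled exploration noise.
        diff = X - consensus
        noise = rng.standard_normal(X.shape)
        X = (X - lam * dt * diff
             + sigma * np.sqrt(dt) * np.linalg.norm(diff, axis=1, keepdims=True) * noise)
    return consensus

# Illustrative non-convex objective (Rastrigin-like) with global minimum at 0.
f = lambda x: np.sum(x ** 2 + 1.0 - np.cos(2 * np.pi * x))
print(cbo_minimize(f))
```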