2015
DOI: 10.1016/j.jmaa.2014.10.048
Linear convergence of a type of iterative sequences in nonconvex quadratic programming

Abstract: By using error bounds for affine variational inequalities we prove that any iterative sequence generated by the Projection DC (Difference-of-Convex functions) decomposition algorithm in quadratic programming is R-linearly convergent, provided that the original problem has solutions. Our result solves in the affirmative the first part of the conjecture stated by Le Thi, Pham Dinh and Yen in their recent paper [8, p. 489].
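For a quadratic program min ½xᵀQx + qᵀx over a polyhedron C, the Projection DC decomposition uses f = g − h with g(x) = (ρ/2)‖x‖² on C and h(x) = (ρ/2)‖x‖² − f(x), which is convex once ρ ≥ ‖Q‖; each DCA step then collapses to a single projection, x_{k+1} = P_C(x_k − (Qx_k + q)/ρ). A minimal Python sketch under these assumptions (box constraints stand in for a general polyhedron so the projection is a clip; the matrix and data are illustrative, not taken from the paper):

```python
import numpy as np

def projection_dca(Q, q, lo, hi, rho=None, tol=1e-10, max_iter=5000):
    """Projection DC decomposition for the (possibly nonconvex) QP
        min 0.5 x^T Q x + q^T x   s.t.   lo <= x <= hi.
    DC split: f = g - h, g(x) = (rho/2)||x||^2 on the box,
    h(x) = (rho/2)||x||^2 - f(x), convex once rho >= ||Q||.
    Each DCA iteration reduces to one projected-gradient step of
    length 1/rho: x_{k+1} = P_C(x_k - (Q x_k + q)/rho)."""
    if rho is None:
        rho = np.linalg.norm(Q, 2) + 1.0   # spectral norm + margin makes h convex
    x = np.clip(np.zeros_like(q), lo, hi)  # feasible starting point
    for _ in range(max_iter):
        x_new = np.clip(x - (Q @ x + q) / rho, lo, hi)
        if np.linalg.norm(x_new - x) < tol:  # successive iterates have settled
            return x_new
        x = x_new
    return x

# An indefinite Q (eigenvalues 2 and -1), so the QP is genuinely nonconvex.
Q = np.array([[2.0, 0.0], [0.0, -1.0]])
q = np.array([1.0, 0.5])
x_star = projection_dca(Q, q, lo=-1.0, hi=1.0)  # settles at the KKT point (-0.5, -1)
```

The iterates here settle at a KKT point of the nonconvex problem, the behavior whose R-linear rate the paper establishes.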

Cited by 10 publications (5 citation statements). References 20 publications.
“…We choose a natural number N and define the mesh size. Since the optimal control is assumed to be bang–bang, we identify the discretized control with its piecewise constant extension. Moreover, we identify the discretized state and costate with their piecewise linear interpolations. The Euler discretization of (1.1) is then a quadratic optimization problem over a polyhedral convex set, on which the gradient projection method converges linearly; see e.g. [30]. This means that for each N there is a linear rate constant. In the following examples, we will consider various values of N; the results will confirm the sublinear rate obtained in Theorem 3.2.…”
Section: Numerical Illustrations
confidence: 99%
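The linear convergence of the gradient projection method invoked in this excerpt can be observed numerically. A small sketch, assuming an illustrative positive definite Q and a box in place of the general polyhedral set (none of the data comes from the cited experiments):

```python
import numpy as np

# Convex QP over a box (a simple polyhedral set):
#     min 0.5 x^T Q x + q^T x   s.t.   0 <= x <= 1.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite
q = np.array([-1.0, -4.0])
L = np.linalg.norm(Q, 2)                 # Lipschitz constant of the gradient

# Gradient projection with fixed stepsize 1/L.
x = np.zeros(2)
iterates = [x]
for _ in range(200):
    x = np.clip(x - (Q @ x + q) / L, 0.0, 1.0)
    iterates.append(x)

# Empirical rate: ratios of successive distances to the (numerical) solution
# stay bounded away from 1 -- the signature of linear convergence.
x_star = iterates[-1]
errors = [np.linalg.norm(z - x_star) for z in iterates[:-1]]
rates = [b / a for a, b in zip(errors, errors[1:]) if a > 1e-12]
```

On this instance the error ratios quickly stabilize below 1, matching the per-N linear rate the excerpt describes, while the excerpt's point is that the rate constant degrades as N grows.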
“…The first aim of the present paper is to prove that any DCA sequence generated by Algorithm B converges R-linearly to a KKT point. Hence, combining this with Theorem 2.1 from [26], we have a complete solution for the Conjecture in [12, p. 489]. Our result is obtained by applying some arguments of [26] and a new technique in dealing with implicitly defined DCA sequences.…”
Section: Introduction
confidence: 68%
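For reference, R-linear convergence of a sequence (x_k) to x* means the errors are dominated by a geometric sequence:

```latex
\|x_k - x^*\| \le C\,\mu^k \quad \text{for some } C > 0,\ \mu \in (0,1), \text{ and all } k,
```

in contrast to Q-linear convergence, which requires the stronger per-step ratio condition \(\|x_{k+1} - x^*\| \le \mu\,\|x_k - x^*\|\).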
“…In our second experiment we considered matrices of the form Q_μ^n as defined in (32). In order to generate hard instances (those which are close to being copositive) we took μ := 1.9.…”
Section: Methods
confidence: 99%