2017
DOI: 10.1109/tcyb.2016.2567449

A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints

Abstract: Pseudoconvex optimization, an important class of nonconvex optimization, plays a significant role in scientific and engineering applications. In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems with equality and inequality constraints. It is proved that, from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural…
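The idea described in the abstract — recurrent dynamics that drive the state into the feasible region and then toward the minimizer — can be sketched with an exact-penalty subgradient flow. This is a minimal illustration, not the paper's exact model: it uses a convex quadratic objective (convexity is a special case of pseudoconvexity), illustrative constraints, an assumed penalty parameter sigma, and forward-Euler integration.

```python
import numpy as np

# Hedged sketch of penalty-based recurrent dynamics for
#   minimize  f(x) = x1^2 + x2^2        (convex, hence pseudoconvex)
#   s.t.      h(x) = x1 + x2 - 1 = 0    (equality constraint)
#             g(x) = -x1       <= 0     (inequality constraint)
# The state follows dx/dt = -grad f(x) - sigma * (subgrad |h(x)| + subgrad max(g(x), 0)),
# integrated with forward Euler. sigma, dt, and the step count are illustrative.

def simulate(x0, sigma=10.0, dt=1e-3, steps=20000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        grad_f = 2.0 * x
        h = x[0] + x[1] - 1.0
        sub_h = np.sign(h) * np.array([1.0, 1.0])          # subgradient of |h|
        g = -x[0]
        sub_g = np.array([-1.0, 0.0]) if g > 0 else np.zeros(2)  # subgradient of max(g, 0)
        x = x + dt * (-grad_f - sigma * (sub_h + sub_g))
    return x

x_star = simulate([3.0, -2.0])  # analytic minimizer is (0.5, 0.5)
```

With a sufficiently large sigma the penalty term dominates outside the feasible set, which mirrors the paper's finite-time feasibility property; on the feasible manifold the flow reduces to (projected) descent on f.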

Cited by 107 publications (18 citation statements)
References 44 publications
“…It is worth noting that the property of entering one of the constraint sets or the feasible region is shared by many continuous-time algorithms for solving optimization problems, such as [8,11,19,34]. This proposition guarantees that the continuous-time algorithm (17) has the advantage of simplifying the distributed optimization problem: once the states enter such a set, the corresponding constraint can be ignored.…”
Section: Remark (mentioning; confidence: 99%)
“…Remark 2 The penalty method is a widely adopted approach for solving optimization problems with various kinds of constraints, such as [19,28]. By introducing penalty parameters, one can reduce the dimension of the related algorithms.…”
Section: Theorem 1 Suppose That Assumptions 1-3 Hold If (mentioning; confidence: 99%)
“…With (5) and (10), the six-instant DTVMI algorithm, named the DTVMI-III algorithm, is given as follows:…”
Section: Six-Instant ZeaD Formula (mentioning; confidence: 99%)
“…Methods based on RNNs have manifested their high-speed parallel-processing nature and convenience of hardware implementation [5], [6]. They have been motivated as analog machines to solve optimization problems [7]-[10]. In 2001, aiming at the online solution of various time-variant problems, a new class of RNN, termed the Zhang neural network (ZNN), was proposed in [11]. The ZNN model is essentially based on an indefinite function, termed the Zhang function (ZF), which serves as an error monitor.…”
Section: Introduction (mentioning; confidence: 99%)
“…As is known, the convergence performance of an RNN can be improved significantly by selecting a proper activation function, and several novel activation functions have been presented in recent years [19], [20]. Rather than developing new activation functions, this paper focuses on improving the ZNN model to accelerate its convergence.…”
Section: Two FTRNN Models (mentioning; confidence: 99%)