2019
DOI: 10.1186/s13661-019-1235-8
New proof on exponential convergence for cellular neural networks with time-varying delays

Abstract: In this paper, we deal with a class of cellular neural networks with time-varying delays. Applying differential inequality strategies without assuming the boundedness conditions on the activation functions, we obtain a new sufficient condition that ensures that all solutions of the considered neural networks converge exponentially to the zero equilibrium point. We give an example to illustrate the effectiveness of the theoretical results. The results obtained in this paper are completely new and complement the…
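The abstract above is truncated at the source. For orientation only, a minimal sketch of the class of models the title refers to is given below in standard cellular-neural-network notation; the symbols c_i, a_{ij}, b_{ij}, f_j and \tau_{ij} are illustrative and are not taken from the paper itself.

\[
x_i'(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr) + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_{ij}(t))\bigr), \qquad i = 1,\dots,n,
\]

with bounded time-varying delays 0 \le \tau_{ij}(t) \le \tau and activations satisfying f_j(0) = 0, so that x = 0 is an equilibrium. Exponential convergence to the zero equilibrium then means that there exist constants M \ge 1 and \lambda > 0 such that \|x(t)\| \le M e^{-\lambda t} for all t \ge 0.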


Cited by 3 publications (7 citation statements); references 38 publications.
“…In addition, an example and its numerical simulations are provided to validate the correctness of the theoretical results in this paper. In comparison to previous results presented in [21]-[28], the results of this paper are less conservative and more general. Moreover, the theoretical results in this paper can be seen as a complement and an extension of the previous works [21]-[28].…”
Section: The Conclusion (contrasting)
confidence: 60%
“…But the intrinsic parameters of the non-autonomous NNs are variable and subject to input effects. Up to now, the majority of existing results have been devoted to autonomous NNs [12]-[26], and few papers have considered non-autonomous NNs [27,28]. In [27,28], the authors considered the following non-autonomous cellular neural networks (CNNs) with time-varying delays and infinite delays…”
Section: Introduction (mentioning)
confidence: 99%
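For orientation only, a generic form of a nonautonomous CNN with time-varying and infinite (distributed) delays is sketched below; the coefficient functions c_i(t), a_{ij}(t), b_{ij}(t), the delays \tau_{ij}(t), the kernels K_{ij} and the inputs I_i(t) are illustrative and should not be read as the exact system studied in [27,28].

\[
x_i'(t) = -c_i(t)\,x_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, f_j\bigl(x_j(t-\tau_{ij}(t))\bigr) + \sum_{j=1}^{n} b_{ij}(t) \int_{0}^{\infty} K_{ij}(s)\, g_j\bigl(x_j(t-s)\bigr)\,\mathrm{d}s + I_i(t), \qquad i = 1,\dots,n.
\]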
“…Here we do not assume that the f_j are Lipschitz, and hypothesis (H1) only implies the continuity of f_j at u = 0. Condition (H1) is assumed in [10] for discrete-time models and in [32] for continuous-time models. The most widely used activation functions in neural networks, such as ReLU (rectified linear unit), leaky ReLU, sigmoid, and tanh (hyperbolic tangent), satisfy hypothesis (H1).…”
Section: Remark (mentioning)
confidence: 99%
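For reference, the standard definitions of the activation functions named in this statement are recalled below; the precise statement of hypothesis (H1) is given in the citing work and is not reproduced here.

\[
\mathrm{ReLU}(u) = \max(0, u), \qquad
\mathrm{LeakyReLU}_{\alpha}(u) =
\begin{cases}
u, & u \ge 0,\\
\alpha u, & u < 0,
\end{cases}
\quad 0 < \alpha < 1,
\]
\[
\sigma(u) = \frac{1}{1 + e^{-u}}, \qquad
\tanh(u) = \frac{e^{u} - e^{-u}}{e^{u} + e^{-u}},
\]

all of which are continuous on the whole real line (indeed globally Lipschitz), and in particular continuous at u = 0.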
“…Therefore, a nonautonomous scenario is necessary for NN models, making it worthwhile to study NN models in a nonautonomous environment. Moreover, most of the published works to date are devoted to autonomous NNs, and few papers have considered nonautonomous NNs [28,29]. For example, in [28,29], the authors considered the following nonautonomous cellular neural networks (CNNs) with time-varying delays and infinite delays…”
Section: Introduction (mentioning)
confidence: 99%