2018 IEEE Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2018.8619430
Asynchronous and Distributed Tracking of Time-Varying Fixed Points

Abstract: This paper develops an algorithmic framework for tracking fixed points of time-varying contraction mappings. Analytical results for the tracking error are established for the cases where: (i) the underlying contraction self-map changes at each step of the algorithm; (ii) only imperfect information of the map is available; and (iii) the algorithm is implemented in a distributed fashion, with communication delays and packet drops leading to asynchronous algorithmic updates. The analytical results are applica…
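
To make the setting concrete, the sketch below (a rough illustration under simple assumptions, not the paper's algorithm; the function name `track_fixed_points` and the toy map are hypothetical) applies one update of the currently available contraction per step and records the resulting iterates:

```python
import numpy as np

def track_fixed_points(maps, x0):
    # Apply one update of the currently available contraction per step.
    # Each T in `maps` is assumed to be a contraction (Lipschitz constant c < 1)
    # whose fixed point drifts slowly over time.
    x = np.asarray(x0, dtype=float)
    trajectory = []
    for T in maps:            # the self-map changes at each step of the algorithm
        x = T(x)              # single fixed-point update with the current map
        trajectory.append(x.copy())
    return trajectory

# Toy example: T_k(x) = c*x + b_k has fixed point b_k / (1 - c), which drifts with k.
c = 0.5
maps = [lambda x, k=k: c * x + 0.1 * np.sin(0.05 * k) for k in range(200)]
trajectory = track_fixed_points(maps, x0=np.zeros(1))
```

For a uniform contraction factor c < 1 and a slowly drifting fixed point, a standard argument keeps the iterates within roughly (per-step fixed-point drift)/(1 − c) of the moving fixed point; tracking-error guarantees of this general flavor are what the abstract's analysis concerns, extended to inexact maps and asynchronous distributed updates.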

Cited by 11 publications (16 citation statements)
References 24 publications

Citation statements, ordered by relevance:
“…We next analyze the proposed algorithm under the assumption of synchronous updates in steps [S1] and [S2] above. The analysis of the asynchronous case can be carried out similarly on expense of heavier notation and further assumptions; see for example [45].…”
Section: Performance Analysis (mentioning)
confidence: 99%
“…Zero-order (or gradient-free) optimization has been a subject of interest in the optimization, control, and machine learning communities for decades. The seminal paper of Kiefer and Wolfowitz [25] introduced a one-dimensional variant of approximation (6); for a d-dimensional problem, it perturbs each dimension separately and requires 2d function evaluations. The simultaneous perturbation stochastic approximation (SPSA) algorithm [34] uses zero-mean independent random perturbations, requiring two function evaluations at each step.…”
Section: Literature Review (mentioning)
confidence: 99%
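To make the quoted comparison concrete, here is a minimal SPSA-style gradient estimator (a sketch, not code from the cited works; the function name and default step size are assumptions). It draws a single zero-mean ±1 direction and uses two function evaluations per estimate, independent of the dimension, whereas a coordinate-wise Kiefer-Wolfowitz scheme needs 2d evaluations in dimension d:

```python
import numpy as np

def spsa_gradient(f, x, delta=1e-2, rng=None):
    # Simultaneous-perturbation gradient estimate: perturb all coordinates at once
    # with an independent zero-mean +/-1 direction, so only two evaluations of f
    # are needed regardless of the dimension of x.
    rng = np.random.default_rng() if rng is None else rng
    d = rng.choice([-1.0, 1.0], size=x.shape)
    return (f(x + delta * d) - f(x - delta * d)) / (2.0 * delta) * (1.0 / d)

# Usage on a toy quadratic: the true gradient of f(x) = ||x||^2 is 2x.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
print(spsa_gradient(f, x))   # unbiased but noisy estimate of [2.0, -4.0, 1.0]
```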
“…It is particularly true for fast communication and control. In fact, the asynchronicity can be modeled as an additional noise source, and can be analyzed similarly to, e.g., [6]. In the next section, we numerically evaluate the sensitivity of our approach to different levels of noise.…”
Section: Distributed Algorithm Implementation (mentioning)
confidence: 99%
“…This paper is also closely related to a growing body of research on feedback optimization, which often includes time-varying optimization problems. Relevant work develops prediction-correction algorithms for problems with timevarying objective functions in both the centralized [25], [26] and distributed cases [27], [28]. Specifically, [27] considers an objective function that is the sum of locally available functions at each node and uses higher-order information to perform the prediction step.…”
Section: Introduction (mentioning)
confidence: 99%
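As a rough, centralized sketch of the prediction-correction idea described in the quote (not the algorithms of [25]-[28]; all names and constants are illustrative), the prediction step uses second-order information to extrapolate how the minimizer drifts in time, and the correction step runs a few gradient iterations on the objective at the new time instant:

```python
import numpy as np

def prediction_correction_step(grad_x, hess_xx, grad_tx, x, t, dt, alpha=0.1, n_corr=5):
    # Prediction: estimate how the minimizer moves as t advances by dt, using the
    # Hessian in x and the time derivative of the gradient (higher-order information).
    x = x - dt * np.linalg.solve(hess_xx(x, t), grad_tx(x, t))
    # Correction: a few gradient-descent steps on the objective at the new time.
    t = t + dt
    for _ in range(n_corr):
        x = x - alpha * grad_x(x, t)
    return x, t

# Toy time-varying quadratic f(x, t) = 0.5 * ||x - r(t)||^2 with r(t) = [sin t, cos t].
r  = lambda t: np.array([np.sin(t), np.cos(t)])
dr = lambda t: np.array([np.cos(t), -np.sin(t)])
grad_x  = lambda x, t: x - r(t)
hess_xx = lambda x, t: np.eye(2)
grad_tx = lambda x, t: -dr(t)

x, t = np.zeros(2), 0.0
for _ in range(50):
    x, t = prediction_correction_step(grad_x, hess_xx, grad_tx, x, t, dt=0.1)
```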
“…In [28,Remark 3], it is noted that distributed time-varying optimization by agents with different computational abilities is subject to ongoing research. To the best of our knowledge, this problem remains open, and we present a solution to it by allowing agents to both compute and communicate totally asynchronously.…”
Section: Introduction (mentioning)
confidence: 99%