2015 American Control Conference (ACC)
DOI: 10.1109/acc.2015.7170879
A switched dynamical system framework for analysis of massively parallel asynchronous numerical algorithms

Abstract: In the near future, massively parallel computing systems will be necessary to solve computation-intensive applications. The key bottleneck in massively parallel implementations of numerical algorithms is the synchronization of data across processing elements (PEs) after each iteration, which results in significant idle time. Thus, there is a trend towards relaxing the synchronization and adopting an asynchronous model of computation to reduce idle time. However, it is not clear what is the effect of this relaxa…

Cited by 14 publications (18 citation statements)
References 23 publications
“…One of the biggest concerns is the notorious scalability problem (also known as the curse-of-dimensionality problem), which causes computational intractability due to the extremely large number of switching modes η. In [20], Lee et al. first addressed this issue, which is briefly explained as follows.…”
Section: Stability Analysis and Control Algorithm
confidence: 99%
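The combinatorial growth behind the excerpt's scalability concern can be illustrated with a back-of-the-envelope count. The model below is an assumption for illustration only, not taken from the paper: suppose each of N processing elements can independently read one of d possible delayed copies of its neighbours' data, so the switched system must distinguish η = d**N modes.

```python
# Hypothetical mode count for a switched-system model of asynchronous
# computation (illustrative assumption, not the paper's exact model):
# each of n_pes processing elements independently picks one of n_delays
# delayed data versions, giving n_delays ** n_pes switching modes.
def num_modes(n_pes: int, n_delays: int) -> int:
    return n_delays ** n_pes

for n in (4, 8, 16):
    print(f"N={n:2d} PEs, d=3 delays -> eta = {num_modes(n, 3)}")
```

Even for modest PE counts the mode count explodes (3**16 is already over 43 million), which is why enumerating modes directly becomes intractable.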
“…2) Parallel computing for fixed-point iteration - Parallel computing is a widely used technique to speed up the computation of fixed-point iterations. For example, in [22] and [23], the one-dimensional heat equation is solved in parallel using a finite-difference method, in which a certain group of grid points of the finite-difference scheme is assigned to each CPU or GPU core. Thus, each core computes the values for its own group of grid points, followed by communication between cores to update the values at the boundary grid points of each group.…”
Section: Consensus Critical Applications
confidence: 99%
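The partitioned scheme the excerpt describes can be sketched in a few lines. This is a minimal serial simulation under stated assumptions, not the cited GPU implementation: an explicit Jacobi-style update for the 1D heat equation, with the grid split into blocks that stand in for per-core partitions; the halo reads across block edges play the role of the inter-core boundary communication. All function names are illustrative.

```python
import numpy as np

def heat_step(u, r):
    """One synchronized explicit update: u_i <- u_i + r*(u_{i-1} - 2u_i + u_{i+1})."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return u_new

def parallel_heat(u0, r, n_iters, n_blocks):
    """Block-partitioned version of heat_step: each block updates its interior
    points from the previous iterate, reading one halo cell from each
    neighbouring block (the 'boundary communication' of the excerpt)."""
    u = u0.copy()
    n = len(u)
    edges = np.linspace(0, n, n_blocks + 1, dtype=int)  # block boundaries
    for _ in range(n_iters):
        u_new = u.copy()
        for b in range(n_blocks):
            lo, hi = edges[b], edges[b + 1]
            for i in range(max(lo, 1), min(hi, n - 1)):
                # reads at i-1 / i+1 may cross a block edge: halo exchange
                u_new[i] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
        u = u_new
    return u
```

Because every block still synchronizes at the end of each iteration, the partitioned version reproduces the serial update exactly; the asynchronous relaxation discussed elsewhere in this page is precisely what breaks that equivalence.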
“…For decades, it has been reported that computing performance in parallel computation can deteriorate due to the synchronization penalty that necessarily accompanies parallel implementation of a given numerical scheme. Thus, there is a trend to relax this synchronization latency by adopting alternative approaches and techniques such as relaxed synchronization [1,2] or asynchronous parallel computing algorithms [3,4,5,6,7,8]. Although asynchronous parallel computing algorithms have arisen to overcome the synchronization bottleneck and hence speed up the computation, the randomness of asynchrony makes the solution unpredictable, which in turn leads to numerical inaccuracy or, in the worst case, even instability.…”
Section: Introduction
confidence: 99%
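The effect of asynchrony that the excerpt mentions can be modelled with a toy example, under stated assumptions that are not the paper's algorithm: each interior grid point of an explicit 1D heat-equation update occasionally reads a stale neighbour value from the previous iterate (probability p_stale) instead of the freshest one, mimicking a core that has not yet received its neighbour's latest data. Names and probabilities are illustrative.

```python
import numpy as np

def async_heat(u0, r, n_iters, p_stale=0.3, seed=0):
    """Jacobi-style 1D heat update where each neighbour read is, with
    probability p_stale, taken from the previous iterate (stale data) --
    a toy model of relaxed synchronization, not the paper's scheme."""
    rng = np.random.default_rng(seed)
    u_prev = u0.copy()
    u = u0.copy()
    for _ in range(n_iters):
        u_new = u.copy()
        for i in range(1, len(u) - 1):
            # stale read = neighbour value from one iteration ago
            left = u_prev[i - 1] if rng.random() < p_stale else u[i - 1]
            right = u_prev[i + 1] if rng.random() < p_stale else u[i + 1]
            u_new[i] = u[i] + r * (left - 2.0 * u[i] + right)
        u_prev, u = u, u_new
    return u
```

With p_stale = 0 this reduces to the ordinary synchronized iteration; with p_stale > 0 the trajectory becomes a random sequence of update patterns, which is exactly the switching behaviour the switched-system framework below is built to analyze.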
“…In [7], we developed mathematical proofs of the stability, rate of convergence, and error probability of the asynchronous 1D heat equation via a dynamical-system framework (specifically, the switched-system framework [9,10]). All the results in this note are based on our previous work [7]. Thus, this note aims at testing the asynchronous scheme for the 1D heat equation with CUDA rather than developing theory and proofs.…”
Section: Introduction
confidence: 99%